(1)

Bayesian Learning

[Read Ch. 6]

[Suggested exercises: 6.1, 6.2, 6.6]

Bayes Theorem

MAP, ML hypotheses

MAP learners

Minimum description length principle

Bayes optimal classifier

Naive Bayes learner

Example: Learning over text data

Bayesian belief networks

Expectation Maximization algorithm

(2)

Two Roles for Bayesian Methods

Provides practical learning algorithms:

Naive Bayes learning

Bayesian belief network learning

Combine prior knowledge (prior probabilities) with observed data

Requires prior probabilities

Provides useful conceptual framework

Provides "gold standard" for evaluating other learning algorithms

Additional insight into Occam's razor

(3)

Bayes Theorem

$P(h \mid D) = \frac{P(D \mid h)\,P(h)}{P(D)}$

P(h) = prior probability of hypothesis h

P(D) = prior probability of training data D

P(h|D) = probability of h given D

P(D|h) = probability of D given h

(4)

Choosing Hypotheses

$P(h \mid D) = \frac{P(D \mid h)\,P(h)}{P(D)}$

Generally want the most probable hypothesis given the training data

Maximum a posteriori hypothesis $h_{MAP}$:

$h_{MAP} = \arg\max_{h \in H} P(h \mid D)$
$\qquad = \arg\max_{h \in H} \frac{P(D \mid h)\,P(h)}{P(D)}$
$\qquad = \arg\max_{h \in H} P(D \mid h)\,P(h)$

If we assume $P(h_i) = P(h_j)$ then we can further simplify, and choose the maximum likelihood (ML) hypothesis

$h_{ML} = \arg\max_{h_i \in H} P(D \mid h_i)$

(5)

Bayes Theorem

Does patient have cancer or not?

A patient takes a lab test and the result comes back positive. The test returns a correct positive result in only 98% of the cases in which the disease is actually present, and a correct negative result in only 97% of the cases in which the disease is not present. Furthermore, .008 of the entire population have this cancer.

P(cancer) = .008        P(¬cancer) = .992
P(+ | cancer) = .98     P(− | cancer) = .02
P(+ | ¬cancer) = .03    P(− | ¬cancer) = .97
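As a quick check of these numbers, a minimal Python sketch (variable names are mine) that applies Bayes theorem to the positive test result:

```python
# Bayes theorem for the cancer example: which hypothesis is MAP given a positive test?
p_cancer, p_not_cancer = 0.008, 0.992        # priors
p_pos_given_cancer = 0.98                    # correct positive rate
p_pos_given_not_cancer = 0.03                # false positive rate

# Unnormalized posteriors P(+|h) P(h)
post_cancer = p_pos_given_cancer * p_cancer              # 0.00784
post_not_cancer = p_pos_given_not_cancer * p_not_cancer  # 0.02976

# Normalize by P(+) to get the true posteriors
p_pos = post_cancer + post_not_cancer
print(post_cancer / p_pos)       # P(cancer|+)  ~ 0.21
print(post_not_cancer / p_pos)   # P(¬cancer|+) ~ 0.79  ->  h_MAP = ¬cancer
```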

(6)

Basic Formulas for Probabilities

Product rule: probability $P(A \wedge B)$ of a conjunction of two events A and B:

$P(A \wedge B) = P(A \mid B)\,P(B) = P(B \mid A)\,P(A)$

Sum rule: probability of a disjunction of two events A and B:

$P(A \vee B) = P(A) + P(B) - P(A \wedge B)$

Theorem of total probability: if events $A_1, \ldots, A_n$ are mutually exclusive with $\sum_{i=1}^{n} P(A_i) = 1$, then

$P(B) = \sum_{i=1}^{n} P(B \mid A_i)\,P(A_i)$

(7)

Brute Force MAP Hypothesis Learner

1. For each hypothesis h in H, calculate the posterior probability

$P(h \mid D) = \frac{P(D \mid h)\,P(h)}{P(D)}$

2. Output the hypothesis $h_{MAP}$ with the highest posterior probability

$h_{MAP} = \arg\max_{h \in H} P(h \mid D)$
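A minimal sketch of this brute-force learner (my own naming; it assumes the hypothesis space is small enough to enumerate and that `prior` and `likelihood` are supplied by the caller):

```python
def brute_force_map(hypotheses, prior, likelihood, D):
    """Return the MAP hypothesis by scoring every h in H.

    prior(h)         -> P(h)
    likelihood(h, D) -> P(D|h)
    P(D) is constant across h, so it can be dropped from the argmax.
    """
    return max(hypotheses, key=lambda h: likelihood(h, D) * prior(h))
```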

(8)

Relation to Concept Learning

Consider our usual concept learning task

instance space X, hypothesis space H, training examples D

Consider the Find-S learning algorithm (outputs the most specific hypothesis from the version space $VS_{H,D}$)

What would Bayes rule produce as the MAP hypothesis?

Does Find-S output a MAP hypothesis?

(9)

Relation to Concept Learning

Assume a fixed set of instances $\langle x_1, \ldots, x_m \rangle$

Assume D is the set of classifications $D = \langle c(x_1), \ldots, c(x_m) \rangle$

Choose $P(D \mid h)$:

(10)

Relation to Concept Learning

Assume a fixed set of instances $\langle x_1, \ldots, x_m \rangle$

Assume D is the set of classifications $D = \langle c(x_1), \ldots, c(x_m) \rangle$

Choose $P(D \mid h)$:

$P(D \mid h) = 1$ if h is consistent with D
$P(D \mid h) = 0$ otherwise

Choose P(h) to be the uniform distribution:

$P(h) = \frac{1}{|H|}$ for all h in H

Then,

$P(h \mid D) = \begin{cases} \frac{1}{|VS_{H,D}|} & \text{if } h \text{ is consistent with } D \\ 0 & \text{otherwise} \end{cases}$

(11)

Evolution of Posterior Probabilities

[Figure: three panels over the hypothesis space showing (a) the prior P(h), (b) the posterior P(h|D1), and (c) the posterior P(h|D1, D2).]

(12)

Characterizing Learning Algorithms by Equivalent MAP Learners

[Figure: an inductive system (Candidate Elimination Algorithm, taking training examples D and hypothesis space H and producing output hypotheses) shown as equivalent to a Bayesian inference system (brute-force MAP learner with the same inputs plus the prior assumptions made explicit: P(h) uniform; P(D|h) = 0 if inconsistent, = 1 if consistent).]

(13)

Learning A Real Valued Function

[Figure: noisy training values y around a real-valued target function f of x, with the maximum likelihood hypothesis $h_{ML}$ and noise e shown.]

Consider any real-valued target function f

Training examples $\langle x_i, d_i \rangle$, where $d_i$ is a noisy training value

$d_i = f(x_i) + e_i$

$e_i$ is a random variable (noise) drawn independently for each $x_i$ according to some Gaussian distribution with mean = 0

Then the maximum likelihood hypothesis $h_{ML}$ is the one that minimizes the sum of squared errors:

$h_{ML} = \arg\min_{h \in H} \sum_{i=1}^{m} (d_i - h(x_i))^2$

(14)

Learning A Real Valued Function

$h_{ML} = \arg\max_{h \in H} p(D \mid h)$
$\qquad = \arg\max_{h \in H} \prod_{i=1}^{m} p(d_i \mid h)$
$\qquad = \arg\max_{h \in H} \prod_{i=1}^{m} \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{1}{2\sigma^2}(d_i - h(x_i))^2}$

Maximize natural log of this instead...

$h_{ML} = \arg\max_{h \in H} \sum_{i=1}^{m} \ln \frac{1}{\sqrt{2\pi\sigma^2}} - \frac{1}{2\sigma^2}\left(d_i - h(x_i)\right)^2$
$\qquad = \arg\max_{h \in H} \sum_{i=1}^{m} -\frac{1}{2\sigma^2}\left(d_i - h(x_i)\right)^2$
$\qquad = \arg\max_{h \in H} \sum_{i=1}^{m} -\left(d_i - h(x_i)\right)^2$
$\qquad = \arg\min_{h \in H} \sum_{i=1}^{m} \left(d_i - h(x_i)\right)^2$
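To make the conclusion concrete, a small numpy sketch (my own illustration; H is taken to be a grid of straight lines and σ = 0.1 is assumed known) showing that maximizing the Gaussian log-likelihood and minimizing the sum of squared errors select the same hypothesis:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
d = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=x.shape)   # noisy targets d_i = f(x_i) + e_i

# Candidate hypotheses: lines h(x) = w1*x + w0 on a coarse grid
candidates = [(w1, w0) for w1 in np.linspace(0, 4, 41) for w0 in np.linspace(0, 2, 21)]

def sse(w):                       # sum of squared errors of hypothesis w
    return np.sum((d - (w[0] * x + w[1])) ** 2)

def log_lik(w, sigma=0.1):        # Gaussian log-likelihood of the data under w (up to a constant)
    return np.sum(-0.5 * ((d - (w[0] * x + w[1])) / sigma) ** 2)

h_min_sse = min(candidates, key=sse)
h_max_ll = max(candidates, key=log_lik)
print(h_min_sse, h_max_ll)        # the same (w1, w0) wins both criteria
```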

(15)

Learning to Predict Probabilities

Consider predicting survival probability from patient data

Training examples $\langle x_i, d_i \rangle$, where $d_i$ is 1 or 0

Want to train a neural network to output a probability given $x_i$ (not a 0 or 1)

In this case can show

$h_{ML} = \arg\max_{h \in H} \sum_{i=1}^{m} d_i \ln h(x_i) + (1 - d_i)\ln(1 - h(x_i))$

Weight update rule for a sigmoid unit:

$w_{jk} \leftarrow w_{jk} + \Delta w_{jk}$

where

$\Delta w_{jk} = \eta \sum_{i=1}^{m} (d_i - h(x_i))\, x_{ijk}$
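A minimal sketch of this update rule (my own code: a single sigmoid unit trained with batch gradient ascent on the log-likelihood above; `eta` is the learning rate):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sigmoid_unit(X, d, eta=0.1, epochs=1000):
    """X: (m, n) inputs, d: (m,) targets in {0, 1}. Returns learned weights."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        h = sigmoid(X @ w)            # outputs interpreted as probabilities
        w += eta * X.T @ (d - h)      # Δw = η Σ_i (d_i − h(x_i)) x_i
    return w
```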

(16)

Minimum Description Length Principle

Occam's razor: prefer the shortest hypothesis

MDL: prefer the hypothesis h that minimizes

$h_{MDL} = \arg\min_{h \in H} L_{C_1}(h) + L_{C_2}(D \mid h)$

where $L_C(x)$ is the description length of x under encoding C

Example: H = decision trees, D = training data labels

$L_{C_1}(h)$ is # bits to describe tree h

$L_{C_2}(D \mid h)$ is # bits to describe D given h

Note $L_{C_2}(D \mid h) = 0$ if examples are classified perfectly by h. Need only describe exceptions.

Hence $h_{MDL}$ trades off tree size for training errors

(17)

Minimum Description Length Principle

$h_{MAP} = \arg\max_{h \in H} P(D \mid h)\,P(h)$
$\qquad = \arg\max_{h \in H} \log_2 P(D \mid h) + \log_2 P(h)$
$\qquad = \arg\min_{h \in H} -\log_2 P(D \mid h) - \log_2 P(h)$   (1)

Interesting fact from information theory:

The optimal (shortest expected coding length) code for an event with probability p is $-\log_2 p$ bits.

So interpret (1):

$-\log_2 P(h)$ is the length of h under the optimal code

$-\log_2 P(D \mid h)$ is the length of D given h under the optimal code

→ prefer the hypothesis that minimizes

length(h) + length(misclassifications)

(18)

Most Probable Classification of New Instances

So far we've sought the most probable hypothesis given the data D (i.e., $h_{MAP}$)

Given new instance x, what is its most probable classification?

$h_{MAP}(x)$ is not the most probable classification!

Consider:

Three possible hypotheses:

$P(h_1 \mid D) = .4, \quad P(h_2 \mid D) = .3, \quad P(h_3 \mid D) = .3$

Given new instance x,

$h_1(x) = +, \quad h_2(x) = -, \quad h_3(x) = -$

What's the most probable classification of x?

(19)

Bayes Optimal Classifier

Bayes optimal classification:

$\arg\max_{v_j \in V} \sum_{h_i \in H} P(v_j \mid h_i)\,P(h_i \mid D)$

Example:

$P(h_1 \mid D) = .4, \quad P(- \mid h_1) = 0, \quad P(+ \mid h_1) = 1$
$P(h_2 \mid D) = .3, \quad P(- \mid h_2) = 1, \quad P(+ \mid h_2) = 0$
$P(h_3 \mid D) = .3, \quad P(- \mid h_3) = 1, \quad P(+ \mid h_3) = 0$

therefore

$\sum_{h_i \in H} P(+ \mid h_i)\,P(h_i \mid D) = .4$
$\sum_{h_i \in H} P(- \mid h_i)\,P(h_i \mid D) = .6$

and

$\arg\max_{v_j \in V} \sum_{h_i \in H} P(v_j \mid h_i)\,P(h_i \mid D) = -$
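A small sketch (function and dictionary names are mine) that reproduces the vote in this example:

```python
def bayes_optimal_classify(values, posteriors, p_value_given_h):
    """posteriors[h] = P(h|D); p_value_given_h[(v, h)] = P(v|h)."""
    def score(v):
        return sum(p_value_given_h[(v, h)] * p_h for h, p_h in posteriors.items())
    return max(values, key=score)

posteriors = {"h1": 0.4, "h2": 0.3, "h3": 0.3}
p_v_h = {("+", "h1"): 1, ("-", "h1"): 0,
         ("+", "h2"): 0, ("-", "h2"): 1,
         ("+", "h3"): 0, ("-", "h3"): 1}
print(bayes_optimal_classify(["+", "-"], posteriors, p_v_h))   # prints "-"
```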

(20)

Gibbs Classier

Bayes optimal classifier provides the best result, but can be expensive if there are many hypotheses.

Gibbs algorithm:

1. Choose one hypothesis at random, according to $P(h \mid D)$

2. Use this to classify the new instance

Surprising fact: Assume target concepts are drawn at random from H according to priors on H. Then:

$E[error_{Gibbs}] \leq 2\,E[error_{BayesOptimal}]$

Suppose a correct, uniform prior distribution over H, then

Pick any hypothesis from VS, with uniform probability

Its expected error no worse than twice Bayes optimal

(21)

Naive Bayes Classifier

Along with decision trees, neural networks, and nearest neighbor, one of the most practical learning methods.

When to use

Moderate or large training set available

Attributes that describe instances are conditionally independent given the classification

Successful applications:

Diagnosis

Classifying text documents

(22)

Naive Bayes Classifier

Assume target function $f : X \to V$, where each instance x is described by attributes $\langle a_1, a_2, \ldots, a_n \rangle$.

Most probable value of f(x) is:

$v_{MAP} = \arg\max_{v_j \in V} P(v_j \mid a_1, a_2, \ldots, a_n)$

$v_{MAP} = \arg\max_{v_j \in V} \frac{P(a_1, a_2, \ldots, a_n \mid v_j)\,P(v_j)}{P(a_1, a_2, \ldots, a_n)}$
$\qquad = \arg\max_{v_j \in V} P(a_1, a_2, \ldots, a_n \mid v_j)\,P(v_j)$

Naive Bayes assumption:

$P(a_1, a_2, \ldots, a_n \mid v_j) = \prod_i P(a_i \mid v_j)$

which gives

Naive Bayes classifier: $v_{NB} = \arg\max_{v_j \in V} P(v_j) \prod_i P(a_i \mid v_j)$

(23)

Naive Bayes Algorithm

Naive_Bayes_Learn(examples)

For each target value $v_j$

  $\hat{P}(v_j) \leftarrow$ estimate $P(v_j)$

  For each attribute value $a_i$ of each attribute a

    $\hat{P}(a_i \mid v_j) \leftarrow$ estimate $P(a_i \mid v_j)$

Classify_New_Instance(x)

$v_{NB} = \arg\max_{v_j \in V} \hat{P}(v_j) \prod_{a_i \in x} \hat{P}(a_i \mid v_j)$
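A minimal Python sketch of these two procedures for discrete attributes (my own data layout; raw frequency estimates without smoothing — see the m-estimate on a later slide):

```python
from collections import Counter, defaultdict

def naive_bayes_learn(examples):
    """examples: list of (attribute_tuple, target_value). Returns (P_v, P_a_given_v)."""
    class_counts = Counter(v for _, v in examples)
    attr_counts = defaultdict(Counter)                 # attr_counts[v][(position, value)]
    for attrs, v in examples:
        for i, a in enumerate(attrs):
            attr_counts[v][(i, a)] += 1
    P_v = {v: n / len(examples) for v, n in class_counts.items()}
    P_a_v = {v: {key: n / class_counts[v] for key, n in attr_counts[v].items()}
             for v in class_counts}
    return P_v, P_a_v

def classify_new_instance(x, P_v, P_a_v):
    def score(v):
        p = P_v[v]
        for i, a in enumerate(x):
            p *= P_a_v[v].get((i, a), 0.0)             # unseen attribute value -> probability 0
        return p
    return max(P_v, key=score)
```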

(24)

Naive Bayes: Example

Consider PlayTennis again, and new instance

$\langle Outlk = sun, Temp = cool, Humid = high, Wind = strong \rangle$

Want to compute:

$v_{NB} = \arg\max_{v_j \in V} P(v_j) \prod_i P(a_i \mid v_j)$

$P(y)\,P(sun \mid y)\,P(cool \mid y)\,P(high \mid y)\,P(strong \mid y) = .005$

$P(n)\,P(sun \mid n)\,P(cool \mid n)\,P(high \mid n)\,P(strong \mid n) = .021$

$\rightarrow v_{NB} = n$

(25)

Naive Bayes: Subtleties

1. Conditional independence assumption is often violated

$P(a_1, a_2, \ldots, a_n \mid v_j) = \prod_i P(a_i \mid v_j)$

...but it works surprisingly well anyway. Note that we don't need the estimated posteriors $\hat{P}(v_j \mid x)$ to be correct; we need only that

$\arg\max_{v_j \in V} \hat{P}(v_j) \prod_i \hat{P}(a_i \mid v_j) = \arg\max_{v_j \in V} P(v_j)\,P(a_1, \ldots, a_n \mid v_j)$

see [Domingos & Pazzani, 1996] for analysis

Naive Bayes posteriors often unrealistically close to 1 or 0

(26)

Naive Bayes: Subtleties

2. What if none of the training instances with target value $v_j$ have attribute value $a_i$? Then

$\hat{P}(a_i \mid v_j) = 0$, and...

$\hat{P}(v_j) \prod_i \hat{P}(a_i \mid v_j) = 0$

Typical solution is a Bayesian estimate for $\hat{P}(a_i \mid v_j)$:

$\hat{P}(a_i \mid v_j) \leftarrow \frac{n_c + mp}{n + m}$

where

n is the number of training examples for which $v = v_j$

$n_c$ is the number of examples for which $v = v_j$ and $a = a_i$

p is a prior estimate for $\hat{P}(a_i \mid v_j)$

m is the weight given to the prior (i.e., number of "virtual" examples)
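A one-function sketch of this m-estimate (my own naming), which could replace the raw frequency ratio in the learner sketched earlier:

```python
def m_estimate(n_c, n, p, m):
    """Smoothed estimate of P(a_i|v_j): n_c matching examples out of n,
    prior estimate p weighted by m 'virtual' examples."""
    return (n_c + m * p) / (n + m)

# e.g. an attribute value never seen with class v_j, uniform prior over 3 values:
print(m_estimate(n_c=0, n=10, p=1/3, m=3))   # ~0.077 instead of 0
```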

(27)

Learning to Classify Text

Why?

Learn which news articles are of interest

Learn to classify web pages by topic

Naive Bayes is among the most effective algorithms

What attributes shall we use to represent text documents?

(28)

Learning to Classify Text

Target concept Interesting?: $Document \to \{+, -\}$

1. Represent each document by a vector of words

   one attribute per word position in the document

2. Learning: Use training examples to estimate

   $P(+)$, $P(-)$, $P(doc \mid +)$, $P(doc \mid -)$

Naive Bayes conditional independence assumption

$P(doc \mid v_j) = \prod_{i=1}^{length(doc)} P(a_i = w_k \mid v_j)$

where $P(a_i = w_k \mid v_j)$ is the probability that the word in position i is $w_k$, given $v_j$

one more assumption: $P(a_i = w_k \mid v_j) = P(a_m = w_k \mid v_j), \; \forall i, m$

(29)

Learn_naive_Bayes_text(Examples, V)

1. Collect all words and other tokens that occur in Examples

   Vocabulary ← all distinct words and other tokens in Examples

2. Calculate the required $P(v_j)$ and $P(w_k \mid v_j)$ probability terms

   For each target value $v_j$ in V do

   - $docs_j$ ← subset of Examples for which the target value is $v_j$

   - $P(v_j) \leftarrow \frac{|docs_j|}{|Examples|}$

   - $Text_j$ ← a single document created by concatenating all members of $docs_j$

   - n ← total number of words in $Text_j$ (counting duplicate words multiple times)

   - for each word $w_k$ in Vocabulary

     $n_k$ ← number of times word $w_k$ occurs in $Text_j$

     $P(w_k \mid v_j) \leftarrow \frac{n_k + 1}{n + |Vocabulary|}$

(30)

Classify_naive_Bayes_text(Doc)

positions ← all word positions in Doc that contain tokens found in Vocabulary

Return $v_{NB}$, where

$v_{NB} = \arg\max_{v_j \in V} P(v_j) \prod_{i \in positions} P(a_i \mid v_j)$
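A compact sketch of these two procedures (my own function and variable names; sums of log-probabilities are used to avoid underflow on long documents):

```python
import math
from collections import Counter

def learn_naive_bayes_text(examples):
    """examples: list of (list_of_words, target_value). Returns (priors, word_probs, vocab)."""
    vocab = {w for words, _ in examples for w in words}
    priors, word_probs = {}, {}
    for v in {v for _, v in examples}:
        docs_v = [words for words, t in examples if t == v]
        priors[v] = len(docs_v) / len(examples)
        counts = Counter(w for words in docs_v for w in words)   # word counts in Text_j
        n = sum(counts.values())
        word_probs[v] = {w: (counts[w] + 1) / (n + len(vocab)) for w in vocab}
    return priors, word_probs, vocab

def classify_naive_bayes_text(doc, priors, word_probs, vocab):
    positions = [w for w in doc if w in vocab]
    def log_score(v):
        return math.log(priors[v]) + sum(math.log(word_probs[v][w]) for w in positions)
    return max(priors, key=log_score)
```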

(31)

Twenty NewsGroups

Given 1000 training documents from each group, learn to classify new documents according to which newsgroup they came from

comp.graphics              misc.forsale
comp.os.ms-windows.misc    rec.autos
comp.sys.ibm.pc.hardware   rec.motorcycles
comp.sys.mac.hardware      rec.sport.baseball
comp.windows.x             rec.sport.hockey
alt.atheism                sci.space
soc.religion.christian     sci.crypt
talk.religion.misc         sci.electronics
talk.politics.mideast      sci.med
talk.politics.misc         talk.politics.guns

Naive Bayes: 89% classification accuracy

(32)

Article from rec.sport.hockey

Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.edu!ogicse!uwm.edu From: xxx@yyy.zzz.edu (John Doe)

Subject: Re: This year's biggest and worst (opinion)...

Date: 5 Apr 93 09:53:39 GMT

I can only comment on the Kings, but the most obvious candidate for pleasant surprise is Alex Zhitnik. He came highly touted as a defensive

defenseman, but he's clearly much more than that.

Great skater and hard shot (though wish he were more accurate). In fact, he pretty much allowed the Kings to trade away that huge defensive

liability Paul Coffey. Kelly Hrudey is only the biggest disappointment if you thought he was any good to begin with. But, at best, he's only a mediocre goaltender. A better choice would be Tomas Sandstrom, though not through any fault of his own, but because some thugs in Toronto decided

(33)

Learning Curve for 20 Newsgroups

[Figure: learning curve for the 20News task, plotting accuracy (0-100%) against training set size (100 to 10000 documents, 1/3 withheld for test) for the Bayes, TFIDF, and PRTFIDF algorithms.]

(34)

Bayesian Belief Networks

Interesting because:

Naive Bayes assumption of conditional independence too restrictive

But it's intractable without some such assumptions...

Bayesian Belief networks describe conditional independence among subsets of variables

→ allows combining prior knowledge about (in)dependencies among variables with observed training data

(also called Bayes Nets)

(35)

Conditional Independence

Definition: X is conditionally independent of Y given Z if the probability distribution governing X is independent of the value of Y given the value of Z; that is, if

$(\forall x_i, y_j, z_k)\; P(X = x_i \mid Y = y_j, Z = z_k) = P(X = x_i \mid Z = z_k)$

more compactly, we write

$P(X \mid Y, Z) = P(X \mid Z)$

Example: Thunder is conditionally independent of Rain, given Lightning

$P(Thunder \mid Rain, Lightning) = P(Thunder \mid Lightning)$

Naive Bayes uses conditional independence to justify

$P(X, Y \mid Z) = P(X \mid Y, Z)\,P(Y \mid Z) = P(X \mid Z)\,P(Y \mid Z)$

(36)

Bayesian Belief Network

[Figure: Bayesian network over Storm, BusTourGroup, Lightning, Campfire, Thunder, and ForestFire, with the conditional probability table for Campfire given Storm (S) and BusTourGroup (B):]

        S,B    S,¬B   ¬S,B   ¬S,¬B
  C     0.4    0.1    0.8    0.2
  ¬C    0.6    0.9    0.2    0.8

Network represents a set of conditional independence assertions:

Each node is asserted to be conditionally independent of its nondescendants, given its immediate predecessors.

Directed acyclic graph

(37)

Bayesian Belief Network

[Figure: the same Bayesian network and Campfire table as above.]

Represents the joint probability distribution over all variables

e.g., $P(Storm, BusTourGroup, \ldots, ForestFire)$

in general,

$P(y_1, \ldots, y_n) = \prod_{i=1}^{n} P(y_i \mid Parents(Y_i))$

where $Parents(Y_i)$ denotes the immediate predecessors of $Y_i$ in the graph

so, the joint distribution is fully defined by the graph, plus the $P(y_i \mid Parents(Y_i))$
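As a small illustration of this factorization (my own minimal encoding of a network as parent lists plus CPT dictionaries; the probabilities in the usage example are made up):

```python
def joint_probability(assignment, parents, cpt):
    """assignment: {var: value}; parents: {var: [parent vars]};
    cpt[var][(parent values..., value)] = P(var = value | parents)."""
    p = 1.0
    for var, value in assignment.items():
        parent_vals = tuple(assignment[pa] for pa in parents[var])
        p *= cpt[var][parent_vals + (value,)]
    return p

# Tiny hypothetical example: Storm -> Lightning (numbers are illustrative only)
parents = {"Storm": [], "Lightning": ["Storm"]}
cpt = {"Storm": {(True,): 0.1, (False,): 0.9},
       "Lightning": {(True, True): 0.7, (True, False): 0.3,
                     (False, True): 0.01, (False, False): 0.99}}
print(joint_probability({"Storm": True, "Lightning": True}, parents, cpt))  # 0.1 * 0.7
```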

(38)

Inference in Bayesian Networks

How can one infer the (probabilities of) values of one or more network variables, given observed values of others?

Bayes net contains all information needed for this inference

If only one variable with unknown value, easy to infer it

In the general case, the problem is NP-hard

In practice, can succeed in many cases

Exact inference methods work well for some network structures

Monte Carlo methods "simulate" the network randomly to calculate approximate solutions

(39)

Learning of Bayesian Networks

Several variants of this learning task

Network structure might be known or unknown

Training examples might provide values of all network variables, or just some

If structure known and observe all variables

Then it's as easy as training a Naive Bayes classifier

(40)

Learning Bayes Nets

Suppose structure known, variables partially observable

e.g., observe ForestFire, Storm, BusTourGroup, Thunder, but not Lightning, Campfire...

Similar to training neural network with hidden units

In fact, can learn network conditional probability tables using gradient ascent!

Converge to network h that (locally) maximizes $P(D \mid h)$

(41)

Gradient Ascent for Bayes Nets

Let $w_{ijk}$ denote one entry in the conditional probability table for variable $Y_i$ in the network

$w_{ijk} = P(Y_i = y_{ij} \mid Parents(Y_i) = \text{the list } u_{ik} \text{ of values})$

e.g., if $Y_i = Campfire$, then $u_{ik}$ might be $\langle Storm = T, BusTourGroup = F \rangle$

Perform gradient ascent by repeatedly

1. update all $w_{ijk}$ using training data D

$w_{ijk} \leftarrow w_{ijk} + \eta \sum_{d \in D} \frac{P_h(y_{ij}, u_{ik} \mid d)}{w_{ijk}}$

2. then, renormalize the $w_{ijk}$ to assure

$\sum_j w_{ijk} = 1$

$0 \leq w_{ijk} \leq 1$

(42)

More on Learning Bayes Nets

EM algorithm can also be used. Repeatedly:

1. Calculate probabilities of unobserved variables, assuming h

2. Calculate new $w_{ijk}$ to maximize $E[\ln P(D \mid h)]$, where D now includes both observed and (calculated probabilities of) unobserved variables

When structure unknown...

Algorithms use greedy search to add/subtract edges and nodes

Active research topic

(43)

Summary: Bayesian Belief Networks

Combine prior knowledge with observed data

Impact of prior knowledge (when correct!) is to lower the sample complexity

Active research area

- Extend from boolean to real-valued variables

- Parameterized distributions instead of tables

- Extend to first-order instead of propositional systems

- More effective inference methods

- ...

(44)

Expectation Maximization (EM)

When to use:

Data is only partially observable

Unsupervised clustering (target value unobservable)

Supervised learning (some instance attributes unobservable)

Some uses:

Train Bayesian Belief Networks

Unsupervised clustering (AUTOCLASS)

Learning Hidden Markov Models

(45)

Generating Data from Mixture of k Gaussians

[Figure: a probability density p(x) over x formed by a mixture of Gaussians.]

Each instance x is generated by

1. Choosing one of the k Gaussians with uniform probability

2. Generating an instance at random according to that Gaussian

(46)

EM for Estimating k Means

Given:

Instances from X generated by a mixture of k Gaussian distributions

Unknown means $\langle \mu_1, \ldots, \mu_k \rangle$ of the k Gaussians

Don't know which instance $x_i$ was generated by which Gaussian

Determine:

Maximum likelihood estimates of $\langle \mu_1, \ldots, \mu_k \rangle$

Think of the full description of each instance as $y_i = \langle x_i, z_{i1}, z_{i2} \rangle$, where

$z_{ij}$ is 1 if $x_i$ was generated by the j-th Gaussian

$x_i$ observable

$z_{ij}$ unobservable

(47)

EM for Estimating k Means

EM Algorithm: Pick random initial $h = \langle \mu_1, \mu_2 \rangle$, then iterate

E step: Calculate the expected value $E[z_{ij}]$ of each hidden variable $z_{ij}$, assuming the current hypothesis $h = \langle \mu_1, \mu_2 \rangle$ holds.

$E[z_{ij}] = \frac{p(x = x_i \mid \mu = \mu_j)}{\sum_{n=1}^{2} p(x = x_i \mid \mu = \mu_n)} = \frac{e^{-\frac{1}{2\sigma^2}(x_i - \mu_j)^2}}{\sum_{n=1}^{2} e^{-\frac{1}{2\sigma^2}(x_i - \mu_n)^2}}$

M step: Calculate a new maximum likelihood hypothesis $h' = \langle \mu_1', \mu_2' \rangle$, assuming the value taken on by each hidden variable $z_{ij}$ is its expected value $E[z_{ij}]$ calculated above. Replace $h = \langle \mu_1, \mu_2 \rangle$ by $h' = \langle \mu_1', \mu_2' \rangle$.

$\mu_j \leftarrow \frac{\sum_{i=1}^{m} E[z_{ij}]\, x_i}{\sum_{i=1}^{m} E[z_{ij}]}$
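A minimal numpy sketch of these two steps for k = 2 (my own code; σ is assumed known and shared, as on the slide):

```python
import numpy as np

def em_two_means(x, sigma=1.0, iters=50, seed=0):
    """EM for the means of a 2-Gaussian mixture with known, shared sigma."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=2, replace=False)            # random initial <mu1, mu2>
    for _ in range(iters):
        # E step: E[z_ij] proportional to exp(-(x_i - mu_j)^2 / (2 sigma^2))
        w = np.exp(-(x[:, None] - mu[None, :]) ** 2 / (2 * sigma ** 2))
        w /= w.sum(axis=1, keepdims=True)
        # M step: mu_j = sum_i E[z_ij] x_i / sum_i E[z_ij]
        mu = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
    return mu

# Example: data drawn from two Gaussians with means near 0 and 5
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])
print(em_two_means(data))   # approximately [0, 5] (order may vary)
```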

(48)

EM Algorithm

Converges to a local maximum likelihood h and provides estimates of the hidden variables $z_{ij}$

In fact, local maximum in $E[\ln P(Y \mid h)]$

Y is the complete (observable plus unobservable variables) data

Expected value is taken over possible values of the unobserved variables in Y

(49)

General EM Problem

Given:

Observed data $X = \{x_1, \ldots, x_m\}$

Unobserved data $Z = \{z_1, \ldots, z_m\}$

Parameterized probability distribution $P(Y \mid h)$, where

- $Y = \{y_1, \ldots, y_m\}$ is the full data, $y_i = x_i \cup z_i$

- h are the parameters

Determine:

h that (locally) maximizes $E[\ln P(Y \mid h)]$

Many uses:

Train Bayesian belief networks

Unsupervised clustering (e.g., k means)

Hidden Markov Models

(50)

General EM Method

Define likelihood function $Q(h' \mid h)$, which calculates $Y = X \cup Z$ using observed X and current parameters h to estimate Z

$Q(h' \mid h) \leftarrow E[\ln P(Y \mid h') \mid h, X]$

EM Algorithm:

Estimation (E) step: Calculate $Q(h' \mid h)$ using the current hypothesis h and the observed data X to estimate the probability distribution over Y.

$Q(h' \mid h) \leftarrow E[\ln P(Y \mid h') \mid h, X]$

Maximization (M) step: Replace hypothesis h by the hypothesis h' that maximizes this Q function.

$h \leftarrow \arg\max_{h'} Q(h' \mid h)$
