What is the probability of a combination of two incompatible events? Independence of events

P(A) = 1 - 0.3 = 0.7.

3. The theorem of addition of the probabilities of opposite events

Two incompatible events that form a complete group are called opposite. If one of the two opposite events is denoted by A, then the other is usually denoted by Ā. The opposite event Ā consists in the non-occurrence of the event A.

Theorem. The sum of the probabilities of opposite events is equal to one:

P(A) + P(Ā) = 1.

Example 4. A box contains 11 parts, 8 of which are standard. Find the probability that among 3 parts extracted at random there is at least one defective part.

Solution. The problem can be solved in two ways.

Method 1. The events "among the extracted parts there is at least one defective part" and "among the extracted parts there are no defective parts" are opposite. Denote the first event by A and the second by Ā:

P(A) = 1 - P(Ā).

Find P(Ā). The total number of ways to extract 3 parts from 11 parts is equal to the number of combinations C(11, 3) = 165. The number of standard parts is 8; from this number of parts, 3 standard parts can be extracted in C(8, 3) = 56 ways. Therefore, the probability that there are no non-standard parts among the 3 extracted parts is equal to:

P(Ā) = C(8, 3) / C(11, 3) = 56/165.

According to the theorem of addition of the probabilities of opposite events, the desired probability is equal to: P(A) = 1 - P(Ā) = 1 - 56/165 = 109/165 ≈ 0.66.

Method 2. Event A, "among the extracted parts there is at least one defective part", can be realized as the appearance of:

either event B, "1 defective and 2 non-defective parts were extracted",

or event C, "2 defective and 1 non-defective parts were extracted",

or event D, "3 defective parts were extracted".

Then A = B + C + D. Since the events B, C and D are incompatible, the theorem of addition of the probabilities of incompatible events can be applied:

P(A) = P(B) + P(C) + P(D) = [C(3, 1)·C(8, 2) + C(3, 2)·C(8, 1) + C(3, 3)] / C(11, 3) = (84 + 24 + 1)/165 = 109/165 ≈ 0.66,

which agrees with the result of the first method.
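Both methods can be checked with a short Python sketch (an added illustration, not part of the original solution), using math.comb for the binomial coefficients:

from math import comb

total = comb(11, 3)                      # all ways to draw 3 parts out of 11
p_no_defective = comb(8, 3) / total      # method 1: no defective part among the 3 drawn
p_method1 = 1 - p_no_defective

# method 2: exactly 1, 2 or 3 of the 3 defective parts are drawn
p_method2 = sum(comb(3, k) * comb(8, 3 - k) for k in (1, 2, 3)) / total

print(p_method1, p_method2)              # both print 0.66060... = 109/165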

4. The multiplication theorem for the probabilities of independent events

The product of two events A and B is the event C = AB, consisting in the joint appearance (combination) of these events.

The product of several events is the event consisting in the joint appearance of all these events. For example, the event ABC consists in the combination of the events A, B and C.

Two events are called independent if the probability of one of them does not depend on the appearance or non-appearance of the other.

Theorem. The probability of the joint occurrence of two independent events is equal to the product of the probabilities of these events:

P (AB) = P (A)P (B).

Corollary. The probability of the joint occurrence of several events that are independent in the aggregate is equal to the product of the probabilities of these events:

P(A1 A2 … An) = P(A1) P(A2) … P(An).

Example 5. Find the probability that heads (the coat of arms) appears on both coins when two coins are tossed once.

Solution. Designate the events: A, the appearance of heads on the first coin; B, the appearance of heads on the second coin; C, the appearance of heads on both coins, C = AB.

The probability of the appearance of heads on the first coin:

P(A) = 1/2.

The probability of the appearance of heads on the second coin:

P(B) = 1/2.

Since the events A and B are independent, the desired probability, by the multiplication theorem, is:

P(C) = P(AB) = P(A) P(B) = (1/2)·(1/2) = 1/4.

Example 6. There are 3 boxes containing 10 parts each. The first box contains 8 standard parts, the second 7 and the third 9. One part is taken out at random from each box. Find the probability that all three removed parts are standard.

Solution. The probability that a standard part is taken from the first box (event A):

P(A) = 8/10 = 0.8.

The probability that a standard part is taken from the second box (event B):

P(B) = 7/10 = 0.7.

The probability that a standard part is taken from the third box (event C):

P(C) = 9/10 = 0.9.

Since the events A, B and C are independent in the aggregate, the desired probability (by the multiplication theorem) is equal to:

P(ABC) = P(A) P(B) P(C) = 0.8 · 0.7 · 0.9 = 0.504.
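A minimal Python sketch (an added illustration) that computes the product and checks it with a rough simulation of the three boxes:

import random

p = 0.8 * 0.7 * 0.9                      # multiplication theorem for independent events
print(p)                                 # 0.504

trials = 100_000
standard_per_box = [8, 7, 9]             # standard parts in each box of 10
hits = sum(all(random.randrange(10) < s for s in standard_per_box)
           for _ in range(trials))
print(hits / trials)                     # close to 0.504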

Example 7. The probabilities of occurrence of two independent events A1 and A2 are equal to p1 and p2 respectively. Find the probability that exactly one of these events occurs.

Solution. Introduce notation for the events:

B1: only the event A1 appeared; B2: only the event A2 appeared.

The occurrence of the event B1 is equivalent to the occurrence of the event A1Ā2 (the first event appeared and the second did not), i.e. B1 = A1Ā2.

The occurrence of the event B2 is equivalent to the occurrence of the event Ā1A2 (the first event did not appear and the second appeared), i.e. B2 = Ā1A2.

Thus, to find the probability that exactly one of the events A1 or A2 occurs, it is enough to find the probability of the occurrence of either one of the events B1 and B2. The events B1 and B2 are incompatible, so the theorem of addition of the probabilities of incompatible events applies:

P(B1 + B2) = P(B1) + P(B2) = P(A1) P(Ā2) + P(Ā1) P(A2) = p1(1 - p2) + (1 - p1)p2.
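The resulting expression can be checked numerically. A brief sketch with hypothetical values p1 = 0.3 and p2 = 0.6 (chosen only for illustration), compared against a direct enumeration of the four elementary outcomes:

p1, p2 = 0.3, 0.6                        # hypothetical probabilities of A1 and A2

# formula from the example: P(B1 + B2) = p1(1 - p2) + (1 - p1)p2
p_formula = p1 * (1 - p2) + (1 - p1) * p2

# direct check: sum over the outcomes in which exactly one of the events occurs
p_direct = 0.0
for a1 in (0, 1):
    for a2 in (0, 1):
        if a1 + a2 == 1:
            p_direct += (p1 if a1 else 1 - p1) * (p2 if a2 else 1 - p2)

print(p_formula, p_direct)               # both equal 0.54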

When assessing the probability of any random event, it is very important to understand in advance whether the probability of the event of interest depends on how other events develop. In the classical scheme, when all outcomes are equally likely, we can already estimate the probability of an individual event on its own. We can do this even if the event is a complex collection of several elementary outcomes. But what if several random events occur simultaneously or sequentially? How does this affect the probability of the event we are interested in?

If I roll a die several times and want a six to come up, but I am unlucky every time, does that mean I should increase my bet because, according to probability theory, I am about to get lucky? Alas, probability theory states nothing of the kind. Neither dice, nor cards, nor coins can remember what they showed us last time. It does not matter to them at all whether it is the first or the tenth time today that I am testing my luck. Every time I repeat the throw, I know only one thing: this time, again, the probability of rolling a six is one sixth. Of course, this does not mean that the number I need will never come up. It only means that my result after the first throw and after any other throw are independent events.

Events A and B are called independent if the occurrence of one of them does not affect the probability of the other in any way. For example, the probability of hitting a target with the first of two guns does not depend on whether the target was hit by the other gun, so the events "the first gun hit the target" and "the second gun hit the target" are independent. If two events A and B are independent and the probability of each of them is known, then the probability of the simultaneous occurrence of both the event A and the event B (denoted AB) can be calculated using the following theorem.

Multiplication theorem for probabilities for independent events

P(AB) = P(A) · P(B): the probability of the simultaneous occurrence of two independent events is equal to the product of the probabilities of these events.

Example 1. The probabilities of hitting the target when firing the first and second guns are respectively p1 = 0.7 and p2 = 0.8. Find the probability that a single volley hits the target with both guns simultaneously.

Solution. As we have already seen, the events A (a hit by the first gun) and B (a hit by the second gun) are independent, i.e. P(AB) = P(A) · P(B) = p1 · p2 = 0.7 · 0.8 = 0.56. What happens to our estimates if the underlying events are not independent? Let us modify the previous example a bit.

Example 2. Two shooters in a competition shoot at targets, and if one of them shoots accurately, the opponent becomes nervous and his results deteriorate. How can this everyday situation be turned into a mathematical problem, and how can we outline ways to solve it? It is intuitively clear that the two scenarios for the development of events must somehow be separated; we essentially have to draw up two scenarios, two different tasks. In the first case, if the opponent misses, the scenario is favorable for the nervous athlete and his accuracy will be higher. In the second case, if the opponent has decently realized his chance, the probability of the second athlete hitting the target is reduced.

To separate the possible scenarios (often called hypotheses) of events, we will often use a "probability tree" scheme. This scheme is similar in meaning to the decision tree, which you have probably already encountered. Each branch represents a separate scenario of the development of events, only now it has its own value of the so-called conditional probability (q1, q2, 1 - q1, 1 - q2).
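Such a probability tree can be written out explicitly. The sketch below uses made-up numbers (they are not given in the example): the second shooter hits with conditional probability q1 = 0.8 if the opponent missed and only q2 = 0.5 if the opponent hit:

p_opponent_hit = 0.7                     # hypothetical probability that the opponent hits
q1, q2 = 0.8, 0.5                        # conditional accuracy of the nervous shooter

# total probability of the second shooter hitting, summed branch by branch
p_hit = (1 - p_opponent_hit) * q1 + p_opponent_hit * q2
print(p_hit)                             # 0.59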

This scheme is very convenient for analyzing sequential random events. One more important question remains: where do the initial values of the probabilities come from in real situations? After all, probability theory does not work only with coins and dice. Usually these estimates are taken from statistics, and when statistics are not available, we conduct our own research. And we often have to start it not with collecting data, but with the question of what information we actually need.

Example 3. Suppose we need to estimate the market size in a city with a population of one hundred thousand inhabitants for a new product that is not an essential item, for example, a balm for the care of colored hair. Consider a "probability tree" scheme. In this case, we need to roughly estimate the probability value on each "branch". So, our estimates of the market capacity:

1) 50% of all residents of the city are women,

2) of all women, only 30% dye their hair often,

3) only 10% of them use conditioners for colored hair,

4) of these, only 10% can get up the courage to try a new product,

5) 70% of them usually buy everything not from us, but from our competitors.


According to the law of multiplication of probabilities, we determine the probability of the event of interest A = (a resident of the city buys this new balm from us): P(A) = 0.5 · 0.3 · 0.1 · 0.1 · 0.3 = 0.00045. Multiplying this probability by the number of inhabitants of the city, we get only 45 potential customers, and if we consider that one bottle of this balm lasts several months, the trade is not very lively.

Still, there is some benefit from our assessments. Firstly, we can compare the forecasts of different business ideas; they will have different "forks" on the diagrams, and, of course, the probability values will also be different. Secondly, as we have already said, a random variable is not called random because it does not depend on anything at all. It is just that its exact value is not known in advance. We know that the average number of buyers can be increased (for example, by advertising the new product). So it makes sense to focus our efforts on those "forks" where the probability distribution does not particularly suit us, on those factors that we are able to influence. Consider another quantitative example of research into shopping behavior.
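The chain of multiplications behind the 0.00045 estimate, written out as a small Python sketch (the branch values are the rough estimates listed above):

branches = [0.5,    # share of women among the residents
            0.3,    # of them, dye their hair often
            0.1,    # of them, use conditioners for colored hair
            0.1,    # of them, willing to try the new product
            0.3]    # of them, buy from us rather than from competitors

p_buy = 1.0
for p in branches:
    p_buy *= p

population = 100_000
print(p_buy, round(p_buy * population))  # about 0.00045 and 45 potential customers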

Example 3. On average, 10,000 people visit the food market per day. The probability that a market visitor enters a dairy pavilion is 1/2. It is known that in this pavilion, on average, 500 kg of various products are sold per day. Can we say that the average purchase in a pavilion weighs only 100 g?

Discussion.

Of course not. It is clear that not everyone who entered the pavilion ended up buying something there.


As shown in the diagram, in order to answer the question about the average purchase weight, we must find the answer to another question: what is the probability that a person who enters the pavilion will buy something there? If we do not have such data at our disposal, but we need them, we will have to obtain them ourselves by observing the visitors of the pavilion for some time. Suppose our observations have shown that only one fifth of the pavilion's visitors buy something. Once we have this estimate, the task becomes simple. Out of 10,000 people who come to the market, 5,000 will enter the dairy pavilion, and there will be only 1,000 purchases. The average purchase weight is 500 grams. It is interesting to note that, in order to build a complete picture of what is happening, the logic of conditional "branching" must be defined at each stage of our reasoning as clearly as if we were working with a "specific" situation and not with probabilities.
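The same arithmetic as a short sketch (the one-in-five purchase rate is the observed estimate from the text):

visitors_per_day = 10_000
p_enter_pavilion = 1 / 2                 # probability that a market visitor enters the dairy pavilion
p_buy_if_entered = 1 / 5                 # observed share of pavilion visitors who buy something
daily_sales_kg = 500

buyers = visitors_per_day * p_enter_pavilion * p_buy_if_entered
print(buyers)                                    # 1000 purchases per day
print(daily_sales_kg / buyers * 1000, "g")       # average purchase weight: 500.0 g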

Tasks for self-testing.

1. Let there be an electrical circuit consisting of n series-connected elements, each of which operates independently of the others. The probability p of failure of each element is known. Determine the probability of correct operation of the entire section of the circuit (event A).


2. The student knows 20 of the 25 exam questions. Find the probability that the student knows the three questions suggested by the examiner.

3. Production consists of four successive stages, at each of which equipment operates, for which the probabilities of failure within the next month are equal to p 1, p 2, p 3 and p 4, respectively. Find the probability that there will be no production interruption due to equipment failure in a month.

The dependence of events is understood in the probabilistic sense, not the functional one. This means that the appearance of one of two dependent events does not make it possible to judge unambiguously whether the other has appeared. Probabilistic dependence means that the occurrence of one of the dependent events only changes the probability of the occurrence of the other. If the probability does not change, the events are considered independent.

Definition: Let (Ω, F, P) be an arbitrary probability space and A, B some random events. The event A is said to be independent of the event B if its conditional probability coincides with the unconditional probability:

P(A | B) = P(A).

If P(A | B) ≠ P(A), then the event A is said to depend on the event B.

The concept of independence is symmetric, that is, if the event A does not depend on the event B, then the event B does not depend on the event A. Indeed, let P(A | B) = P(A). Then P(B | A) = P(AB) / P(A) = P(B) P(A | B) / P(A) = P(B). Therefore, one simply says that the events A and B are independent.

The following symmetric definition of the independence of events follows from the rule of multiplication of probabilities.

Definition: Events A and B, defined on the same probability space, are called independent if

P(AB) = P(A) P(B).

If P(AB) ≠ P(A) P(B), then the events A and B are called dependent.

Note that this definition also remains valid in the case when P(A) = 0 or P(B) = 0.

Properties of independent events.

1. If the events A and B are independent, then so are the following pairs of events: A and B̄, Ā and B, Ā and B̄.

▲ Let us prove, for example, the independence of the events A and B̄. Represent the event A as A = AB + AB̄. Since the events AB and AB̄ are incompatible, P(A) = P(AB) + P(AB̄), and due to the independence of the events A and B we get that P(AB̄) = P(A) - P(AB) = P(A) - P(A)P(B) = P(A)(1 - P(B)) = P(A)P(B̄). Hence P(AB̄) = P(A)P(B̄), which means independence. ■

2. If the event A does not depend on the events B1 and B2, which are incompatible (B1B2 = ∅), then the event A does not depend on their sum B1 + B2 either.

▲ Indeed, using the additivity axiom of probability and the independence of the event A from the events B1 and B2, we have:

P(A(B1 + B2)) = P(AB1 + AB2) = P(AB1) + P(AB2) = P(A)P(B1) + P(A)P(B2) = P(A)[P(B1) + P(B2)] = P(A)P(B1 + B2). ■

The relationship between the concepts of independence and incompatibility.

Let A and B be any events with non-zero probability: P(A) > 0, P(B) > 0, so that P(A)P(B) > 0. If at the same time the events A and B are incompatible (AB = ∅), then P(AB) = 0, and therefore the equality P(AB) = P(A)P(B) can never hold. Thus, incompatible events (of non-zero probability) are dependent.

When more than two events are considered at the same time, their pairwise independence does not sufficiently characterize the connection between the events of the whole group. In this case the concept of independence in the aggregate is introduced.

Definition: Events A1, A2, …, An, defined on the same probability space, are called collectively independent (independent in the aggregate) if for any 2 ≤ m ≤ n and any combination of indices 1 ≤ i1 < i2 < … < im ≤ n the following equality is true:

P(Ai1 Ai2 … Aim) = P(Ai1) P(Ai2) … P(Aim).

For m = 2, independence in the aggregate implies pairwise independence of the events. The converse is not true.


Example (S. N. Bernstein).

A random experiment consists in tossing a regular tetrahedron. The face that lands face down is observed. The faces of the tetrahedron are colored as follows: face 1 is white, face 2 is black, face 3 is red, face 4 contains all three colors.

Consider the events:

A = {white comes up}; B = {black comes up};

C = {red comes up}.

Each color is present on two of the four equally likely faces (its own face and the face with all the colors), so P(A) = P(B) = P(C) = 2/4 = 1/2, while any two colors appear together only on the fourth face, so P(AB) = P(AC) = P(BC) = 1/4 = P(A)P(B) = P(A)P(C) = P(B)P(C). Consequently, the events A, B and C are pairwise independent.

But P(ABC) = 1/4 ≠ 1/8 = P(A)P(B)P(C).

Therefore, the events A, B and C are not collectively independent.
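A brief enumeration over the four equally likely faces confirms the pairwise but not joint independence (an added check, not part of the original example):

faces = [{"white"}, {"black"}, {"red"}, {"white", "black", "red"}]

def prob(*colors):
    # probability that all the listed colors are present on the face that comes up
    return sum(all(c in f for c in colors) for f in faces) / len(faces)

print(prob("white"), prob("black"), prob("red"))                # 0.5 0.5 0.5
print(prob("white", "black"), prob("white") * prob("black"))    # 0.25 0.25 -> pairwise independent
print(prob("white", "black", "red"))                            # 0.25
print(prob("white") * prob("black") * prob("red"))              # 0.125 -> not independent in the aggregate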

In practice, as a rule, the independence of events is not established by checking the definition; rather, the other way around: events are considered independent on the basis of external considerations or of the circumstances of the random experiment, and this independence is then used to find the probability of the product of the events.

Theorem (multiplication of probabilities for independent events).

If the events A1, A2, …, An, defined on the same probability space, are independent in the aggregate, then the probability of their product is equal to the product of their probabilities:

P(A1 A2 … An) = P(A1) P(A2) … P(An).

▲ The proof of the theorem follows from the definition of independence of events in the aggregate, or from the general theorem of multiplication of probabilities, taking into account the fact that for independent events each conditional probability coincides with the unconditional one. ■

Example 1 (a typical example of finding conditional probabilities, the concept of independence, the theorem of addition of probabilities).

An electrical circuit consists of three independently operating elements. The failure probabilities of each of the elements are known.

1) Find the probability of circuit failure.

2) The circuit is known to have failed.

What is the probability that the element that failed was:

a) the 1st element; b) the 3rd element?

Solution. Consider the events Ak = {the k-th element failed}, k = 1, 2, 3, and the event A = {the circuit failed}. Then the event A is represented in the form A = A1 + A2 + A3.

1) Since the events A1, A2 and A3 are not incompatible, the additivity axiom of probability (P3) is not applicable; to find the probability P(A) one should use the general theorem of addition of probabilities, according to which

P(A) = P(A1 + A2 + A3) = P(A1) + P(A2) + P(A3) - P(A1A2) - P(A1A3) - P(A2A3) + P(A1A2A3).

Definitions of Probability

Classical definition

The classical "definition" of probability is based on the concept equal opportunities as an objective property of the studied phenomena. Equality is an undefined concept and is established from general considerations of the symmetry of the phenomena under study. For example, when tossing a coin, it is assumed that due to the assumed symmetry of the coin, the homogeneity of the material and the randomness (impartiality) of the toss, there is no reason to prefer "heads" over "heads" or vice versa, that is, the falling out of these sides can be considered equally possible (equally probable) ...

Along with the concept of equipossibility, the classical definition in the general case also requires the concept of an elementary event (outcome), favorable or not to the event A under study. We are talking about outcomes whose occurrence excludes the possibility of other outcomes occurring. These are incompatible elementary events. For example, when a die is thrown, the falling out of a specific number excludes the remaining numbers.

The classical definition of probability can be formulated as follows:

The probability of a random event A is the ratio of the number n of incompatible equiprobable elementary events that constitute the event A to the number N of all possible elementary events:

P(A) = n / N.

For example, let's say two dice are tossed. The total number of equally possible outcomes (elementary events) is obviously 36 (6 possibilities on each die). Let's estimate the probability of getting 7 points. Getting 7 points is possible in the following ways: 1 + 6, 2 + 5, 3 + 4, 4 + 3, 5 + 2, 6 + 1. That is, there are only 6 equally possible outcomes favorable to event A - getting 7 points. Therefore, the probability will be 6/36 = 1/6. For comparison, the probability of getting 12 points or 2 points is only 1/36 - 6 times less.
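The 36 equally possible outcomes can also be enumerated directly; this short sketch reproduces the probabilities 1/6 and 1/36 mentioned above:

from fractions import Fraction

outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]   # 36 elementary events

def p_sum(s):
    favorable = sum(1 for i, j in outcomes if i + j == s)
    return Fraction(favorable, len(outcomes))

print(p_sum(7))    # 1/6
print(p_sum(12))   # 1/36
print(p_sum(2))    # 1/36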

Geometric definition

Despite the fact that the classical definition is intuitive and derived from practice, it at least cannot be directly applied when the number of equally possible outcomes is infinite. A striking example of an infinite number of possible outcomes is a bounded geometric region G, for example, a region in the plane with area S. A randomly "thrown" "point" can, with equal possibility, appear at any point of this region. The problem is to determine the probability of the point falling into some subregion g with area s. In this case, generalizing the classical definition, one arrives at the geometric definition of the probability of falling into the subregion g:

P = s / S.

In view of the equipossibility, this probability does not depend on the shape of the region g; it depends only on its area. This definition can naturally be generalized to a space of any dimension, where the concept of "volume" is used instead of area. Moreover, it is precisely this definition that leads to the modern axiomatic definition of probability. The concept of volume is generalized to the concept of a "measure" of some abstract set, on which requirements are imposed that the "volume" in the geometric interpretation also possesses: first of all, non-negativity and additivity.
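A Monte Carlo sketch of the geometric definition (with an arbitrarily chosen region, not taken from the text: a disc of radius 1 inside the square with corners (-1, -1) and (1, 1)); the hit frequency approaches the area ratio s/S = pi/4:

import math
import random

trials = 200_000
hits = 0
for _ in range(trials):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)   # a point "thrown" into the square G
    if x * x + y * y <= 1:                                 # it landed inside the disc g
        hits += 1

print(hits / trials)     # close to s/S
print(math.pi / 4)       # exact area ratio, about 0.7854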

Frequency (statistical) definition

The classical definition, when complex problems are considered, encounters difficulties of an insurmountable nature. In particular, in some cases it may not be possible to identify equally possible cases. Even in the case of a coin, as is known, there is the clearly non-equiprobable possibility of the coin landing on its edge, which cannot be estimated from theoretical considerations (one can only say that it is unlikely, and that this consideration is rather practical). Therefore, even at the dawn of the theory of probability, an alternative "frequency" definition of probability was proposed. Namely, formally, the probability can be defined as the limit of the frequency of observations of the event A, assuming homogeneity of the observations (that is, the sameness of all observation conditions) and their independence from each other:

P(A) = lim_{n→∞} m/n,

where n is the number of observations and m is the number of occurrences of the event.

Despite the fact that this definition rather indicates a way to estimate an unknown probability, through a large number of homogeneous and independent observations, it nevertheless reflects the content of the concept of probability. Namely, if a certain probability is attributed to an event as an objective measure of its possibility, then this means that under fixed conditions and multiple repetitions we should obtain a frequency of its occurrence close to that probability (the closer, the more observations). Actually, this is the original meaning of the concept of probability. It is based on an objectivist view of natural phenomena. Below we consider the so-called laws of large numbers, which provide a theoretical basis (within the framework of the modern axiomatic approach presented below), including for the frequency estimation of probability.
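A small simulation illustrating the frequency definition: the observed frequency of heads in independent homogeneous trials drifts toward the probability 1/2 as the number of observations grows (an added illustration):

import random

for n in (100, 10_000, 1_000_000):
    m = sum(random.random() < 0.5 for _ in range(n))   # number of occurrences of the event
    print(n, m / n)                                    # the frequency m/n approaches 0.5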

Axiomatic definition

In the modern mathematical approach, probability is defined by Kolmogorov's axioms. It is assumed that some space of elementary events X is given. Subsets of this space are interpreted as random events. The union (sum) of subsets (events) is interpreted as the event consisting in the occurrence of at least one of these events. The intersection (product) of subsets (events) is interpreted as the event consisting in the occurrence of all of these events. Disjoint sets are interpreted as incompatible events (their joint occurrence is impossible). Accordingly, the empty set means an impossible event.

Probability (a probability measure) is a measure (numerical function) P defined on the set of events and having the following properties:

1) non-negativity: P(A) ≥ 0 for every event A;

2) normalization: P(X) = 1;

3) additivity: P(A + B) = P(A) + P(B) for any incompatible events A and B.

If the space of elementary events X is finite, then the indicated additivity condition for two arbitrary incompatible events is sufficient, and additivity for any finite number of incompatible events follows from it. However, in the case of an infinite (countable or uncountable) space of elementary events this condition turns out to be insufficient. What is required is so-called countable or sigma additivity, that is, the fulfillment of the additivity property for any at most countable family of pairwise incompatible events. This is necessary to ensure the "continuity" of the probability measure.

The probability measure may not be defined for all subsets of the set X. It is assumed that it is defined on some sigma-algebra of subsets. These subsets are called measurable with respect to the given probability measure, and it is precisely they that are the random events. The collection (X, F, P), that is, the set of elementary events, a sigma-algebra F of its subsets and a probability measure P defined on it, is called a probability space.

Continuous random variables. In addition to discrete random variables, whose possible values form a finite or infinite sequence of numbers not completely filling any interval, there often occur random variables whose possible values form a certain interval. An example of such a random variable is the deviation of some dimension of a part from its nominal value in a properly adjusted technological process. Random variables of this kind cannot be specified by a probability distribution law p(x). However, they can be specified by means of the probability distribution function F(x). This function is defined in exactly the same way as in the case of a discrete random variable:

F(x) = P(X < x).

Thus, here too the function F(x) is defined on the entire numerical axis, and its value at the point x is equal to the probability that the random variable takes a value less than x. Formula (19) and properties 1° and 2° are valid for the distribution function of any random variable. The proof is carried out similarly to the case of a discrete variable. A random variable is called continuous if there exists for it a nonnegative piecewise continuous function* f(x) satisfying, for any value x, the equality

F(x) = ∫_{-∞}^{x} f(t) dt.   (22)

Based on the geometric meaning of the integral as an area, we can say that the probability of satisfying the inequalities x1 < X < x2 is equal to the area of the curvilinear trapezoid with base [x1, x2] bounded from above by the curve y = f(x) (Fig. 6).

Since P(x1 ≤ X < x2) = F(x2) - F(x1), on the basis of formula (22) we have

P(x1 ≤ X < x2) = ∫_{x1}^{x2} f(x) dx.   (23)

Note that for a continuous random variable the distribution function F(x) is continuous at any point x where the function f(x) is continuous. This follows from the fact that F(x) is differentiable at these points. On the basis of formula (23), setting x1 = x, x2 = x + Δx, we have

P(x ≤ X < x + Δx) = ∫_{x}^{x+Δx} f(t) dt.

By virtue of the continuity of the function F(x) we get that

lim_{Δx→0} P(x ≤ X < x + Δx) = lim_{Δx→0} [F(x + Δx) - F(x)] = 0.

Hence

P(X = x) = 0.

Thus, the probability that a continuous random variable takes any single particular value x is zero. Hence it follows that the events consisting in the fulfillment of each of the inequalities

x1 ≤ X ≤ x2,  x1 < X ≤ x2,  x1 ≤ X < x2,  x1 < X < x2

have the same probability, i.e.

P(x1 ≤ X ≤ x2) = P(x1 < X ≤ x2) = P(x1 ≤ X < x2) = P(x1 < X < x2).

Indeed, for example,

P(x1 ≤ X ≤ x2) = P(X = x1) + P(x1 < X ≤ x2) = P(x1 < X ≤ x2),

because P(X = x1) = 0.

Remark. As we know, if an event is impossible, then the probability of its occurrence is zero. Under the classical definition of probability, when the number of test outcomes is finite, the converse proposition also holds: if the probability of an event is zero, then the event is impossible, since in this case none of the test outcomes favors it. In the case of a continuous random variable, the number of its possible values is infinite. The probability that this variable takes on some particular value x1, as we have seen, is equal to zero. However, it does not follow from this that this event is impossible, since as a result of the test the random variable can, in particular, take the value x1. Therefore, in the case of a continuous random variable it makes sense to speak about the probability of the random variable falling into an interval, and not about the probability that it will take some specific value. For example, in the manufacture of a roller we are not interested in the probability that its diameter will be exactly equal to the nominal value. What matters to us is the probability that the diameter of the roller does not go outside the tolerance range.

Example. The distribution density of a continuous random variable is given as follows:

The graph of the function is shown in Fig. 7. Determine the probability that the random variable takes a value satisfying the given inequalities. Find the distribution function of the given random variable.

The next two sections are devoted to distributions of continuous random variables, which are often encountered in practice - uniform and normal distributions.

* A function is called piecewise continuous on the entire numerical axis if on any segment it is either continuous or has only a finite number of discontinuity points of the first kind.

** The rule for differentiating an integral with a variable upper limit, derived for the case of a finite lower limit, remains valid for integrals with an infinite lower limit. Indeed,

d/dx ∫_{-∞}^{x} f(t) dt = d/dx [ ∫_{-∞}^{a} f(t) dt + ∫_{a}^{x} f(t) dt ] = d/dx ∫_{a}^{x} f(t) dt = f(x),

since the integral

∫_{-∞}^{a} f(t) dt

is a constant value.
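To illustrate the statements above, here is a sketch with an assumed density f(x) = 2x on [0, 1] (chosen only for illustration; it is not the density from the example): interval probabilities are integrals of the density, while the probability of any single value is zero:

def f(x):
    return 2 * x if 0 <= x <= 1 else 0.0       # assumed density

def prob(a, b, steps=100_000):
    # P(a < X < b) as a numerical integral of the density (midpoint rule)
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

print(prob(0.2, 0.5))   # about 0.21, the exact value F(0.5) - F(0.2) = 0.25 - 0.04
print(prob(0.3, 0.3))   # 0.0: a single value has zero probability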

Dependent and independent events. Conditional probability

Distinguish between dependent and independent events. Two events are said to be independent if the occurrence of one of them does not change the likelihood of the occurrence of the other. For example, if two automatic lines are operating in a workshop, which are not interconnected by production conditions, then the stops of these lines are independent events.

Example 3. A coin is tossed twice. The probability of heads appearing in the first trial does not depend on whether heads appears or does not appear in the second trial. In turn, the probability of heads appearing in the second trial does not depend on the result of the first trial. Thus, these two events are independent.

Several events are called collectively independent if any of them does not depend on any other event and on any combination of the others.

Events are called dependent if one of them affects the probability of the other. For example, suppose two production units are linked by a single technological cycle. Then the probability of failure of one of them depends on the state of the other. The probability of one event, calculated on the assumption that another event has occurred, is called the conditional probability of the event and is denoted P(A | B).

The condition for the independence of the event A from the event B is written as P(A | B) = P(A), and the condition for its dependence as P(A | B) ≠ P(A). Let us consider an example of calculating the conditional probability of an event.

Example 4. A box contains 5 cutters: two worn and three new. Two cutters are extracted in succession. Determine the conditional probability that a worn cutter appears at the second extraction, given that the cutter extracted the first time is not returned to the box.

Solution. Denote by A the extraction of a worn cutter in the first case and by Ā the extraction of a new one. Then P(A) = 2/5, P(Ā) = 3/5. Since the removed cutter is not returned to the box, the ratio between the numbers of worn and new cutters changes. Therefore, the probability of removing a worn cutter in the second case depends on which event took place before it.

Denote by B the event meaning the removal of a worn cutter in the second case. The probabilities of this event are as follows:

P(B | A) = 1/4,  P(B | Ā) = 2/4 = 1/2.

Therefore, the probability of the event B depends on whether or not the event A has occurred.
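These conditional probabilities are easy to confirm by enumerating all ordered pairs of cutters (an added check; the box contents are the 2 worn and 3 new cutters from the example):

from itertools import permutations

cutters = ["w", "w", "n", "n", "n"]      # 2 worn, 3 new
pairs = list(permutations(range(5), 2))  # ordered draws without replacement

def cond_prob(second, first):
    given = [(i, j) for i, j in pairs if cutters[i] == first]
    return sum(cutters[j] == second for i, j in given) / len(given)

print(cond_prob("w", "w"))   # 0.25 = 1/4, worn second given worn first
print(cond_prob("w", "n"))   # 0.5  = 1/2, worn second given new first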

Probability density is one of the ways of defining a probability measure on Euclidean space. In the case when the probability measure is the distribution of a random variable, one speaks of the density of the random variable.

Probability density. Let P be a probability measure on R^n, that is, a probability space (R^n, B(R^n), P) is defined, where B(R^n) denotes the Borel σ-algebra on R^n. Let λ denote the Lebesgue measure on R^n.

Definition 1. The probability P is called absolutely continuous (with respect to the Lebesgue measure), P ≪ λ, if every Borel set of Lebesgue measure zero also has probability zero:

λ(A) = 0 ⟹ P(A) = 0.

If the probability P is absolutely continuous, then, according to the Radon–Nikodym theorem, there exists a nonnegative Borel function f such that

P(A) = ∫_A f(x) λ(dx),

where the common abbreviation ∫_A f dλ = ∫_A f(x) λ(dx) is used, and the integral is understood in the sense of Lebesgue.

Definition 2. In a more general form, let (X, F) be an arbitrary measurable space, and let μ and ν be two measures on this space. If there exists a nonnegative f that allows the measure ν to be expressed in terms of the measure μ in the form

ν(A) = ∫_A f dμ,

then such a function is called the density of the measure ν with respect to μ, or the Radon–Nikodym derivative of the measure ν with respect to the measure μ, and is denoted

f = dν/dμ.

Definition 1. The event A is called dependent on the event B if the probability of occurrence of the event A depends on whether or not the event B has occurred. The probability that the event A occurs, given that the event B has occurred, will be denoted P(A | B) and called the conditional probability of the event A given B.

Example 1. An urn contains 3 white balls and 2 black ones. One ball is taken out of the urn (the first drawing), and then a second one (the second drawing). Event B is the appearance of a white ball at the first drawing; event A is the appearance of a white ball at the second drawing.

Obviously, the probability of the event A, if the event B has happened, will be

P(A | B) = 2/4 = 1/2.

The probability of the event A, provided that the event B did not occur (a black ball appeared at the first drawing), will be

P(A | B̄) = 3/4.

We see that

P(A | B) ≠ P(A | B̄).
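A quick enumeration of the ordered draws confirms these values (an added illustration):

balls = ["white"] * 3 + ["black"] * 2
draws = [(i, j) for i in range(5) for j in range(5) if i != j]   # ordered draws without replacement

p_b = sum(balls[i] == "white" for i, j in draws) / len(draws)    # P(B) = 3/5
first_white = [(i, j) for i, j in draws if balls[i] == "white"]
first_black = [(i, j) for i, j in draws if balls[i] == "black"]
p_a_given_b = sum(balls[j] == "white" for i, j in first_white) / len(first_white)
p_a_given_not_b = sum(balls[j] == "white" for i, j in first_black) / len(first_black)

print(p_b, p_a_given_b, p_a_given_not_b)    # 0.6 0.5 0.75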

Theorem 1. The probability of the combination of two events is equal to the product of the probability of one of them and the conditional probability of the second, calculated under the condition that the first event has occurred, i.e.

P(A and B) = P(B) P(A | B).   (1)

Proof. The proof will be given for events that reduce to the scheme of urns (i.e., in the case when the classical definition of probability is applicable).

Let there be n balls in the urn, m of them white and n - m black. Suppose that among the white balls there are k balls marked with an asterisk, and the rest are pure white (Fig. 408).

One ball is taken out of the urn. What is the probability of the event that a white ball marked with an asterisk is taken out?

Let B be the event consisting in the appearance of a white ball, and A the event consisting in the appearance of a ball marked with an asterisk. Obviously,

P(B) = m/n.   (2)

The probability of the appearance of a white ball with an asterisk, provided that a white ball has appeared, will be

P(A | B) = k/m.   (3)

The probability of the appearance of a white ball with an asterisk is P(A and B). Obviously,

P(A and B) = k/n,   (4)

and

k/n = (m/n) · (k/m).   (5)

Substituting into (5) the left-hand sides of expressions (2), (3) and (4), we obtain

P(A and B) = P(B) P(A | B).

Equality (1) is proved.

If the events under consideration do not fit into the classical scheme, then formula (1) serves to define the conditional probability. Namely, the conditional probability of the event A, provided that the event B has occurred, is defined by

P(A | B) = P(A and B) / P(B).

Note 1. Let us apply the last formula to the expression P(B and A):

P(B and A) = P(A) P(B | A).   (6)

In equalities (1) and (6) the left-hand sides are equal, since P(A and B) and P(B and A) are the same probability; therefore, the right-hand sides are also equal, and we can write the equality

P(B) P(A | B) = P(A) P(B | A).

Example 2. For the case of Example 1, given at the beginning of this section, we have P(B) = 3/5, P(A | B) = 2/4 = 1/2. By formula (1) we obtain P(A and B) = (3/5) · (1/2) = 3/10. The probability P(A and B) can also easily be calculated directly.

Example 3. The probability of manufacturing a suitable product by this machine is 0.9. The probability of the appearance of a 1st grade product among suitable products is 0.8. Determine the probability of manufacturing a product of the 1st grade with this machine.

Solution. Event B is the production of a suitable product by the machine; event A is the appearance of a first-grade product. Here P(B) = 0.9, P(A | B) = 0.8. Substituting into formula (1), we obtain the desired probability

P(A and B) = 0.9 · 0.8 = 0.72.

Theorem 2. If the event A can occur only when one of the events B1, B2, …, Bn, which form a complete group of incompatible events, occurs, then the probability of the event A is calculated by the formula

P(A) = P(B1) P(A | B1) + P(B2) P(A | B2) + … + P(Bn) P(A | Bn).   (8)

Formula (8) is called the total probability formula. Proof. The event A can occur only together with one of the combined events AB1, AB2, …, ABn, which are incompatible.

Therefore, by the addition theorem for probabilities, we obtain

P(A) = P(AB1) + P(AB2) + … + P(ABn).

Replacing each term on the right-hand side according to formula (1), P(ABi) = P(Bi) P(A | Bi), we obtain equality (8).
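A sketch of formula (8) with hypothetical numbers (a complete group of three hypotheses with probabilities 0.2, 0.5, 0.3 and made-up conditional probabilities 0.9, 0.6, 0.1):

p_hypotheses = [0.2, 0.5, 0.3]       # P(B1), P(B2), P(B3): a complete group, the sum is 1
p_a_given_b = [0.9, 0.6, 0.1]        # P(A | B1), P(A | B2), P(A | B3)

# total probability formula (8)
p_a = sum(pb * pa for pb, pa in zip(p_hypotheses, p_a_given_b))
print(p_a)                           # 0.18 + 0.30 + 0.03 = 0.51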

Example 4. Three consecutive shots are fired at a target. The probabilities of a hit on the first, second and third shots are given, as are the probabilities of destroying the target with one hit, with two hits and with three hits. Determine the probability of destroying the target with the three shots (event A).

Solution. Consider the complete group of incompatible events:

There was one hit;
