
Probability theory. The simplest concepts of probability theory. The classical definition of probability


Classification of events into certain, impossible and random. The concepts of simple and compound elementary events. Operations on events. The classical definition of the probability of a random event and its properties. Elements of combinatorics in probability theory. Geometric probability. Axioms of probability theory.

Event classification

One of the basic concepts of probability theory is the concept of an event. By an event we mean any fact that may occur as the result of an experiment or trial. An experiment, or trial, means the realization of a certain set of conditions.


Examples of events:

    – hitting the target when firing a gun (experiment: firing the shot; event: hitting the target);
    – two heads appearing when a coin is tossed three times (experiment: tossing the coin three times; event: the appearance of two heads);
    – a measurement error falling within specified limits when measuring the range to a target (experiment: measuring the range; event: the measurement error).

Countless similar examples could be given. Events are denoted by capital letters of the Latin alphabet: A, B, C, etc.


Events are classified as joint and incompatible. Events are called joint if the occurrence of one of them does not exclude the occurrence of the other. Otherwise, the events are called incompatible. For example, two dice are tossed. Let A be the event that three points appear on the first die and B the event that three points appear on the second die; then A and B are joint events. Now let a store receive a batch of shoes of the same style and size but of different colors. Let A be the event that a box taken at random contains black shoes and B the event that it contains brown shoes; then A and B are incompatible events.


An event is called certain (reliable) if it is sure to occur under the conditions of the given experiment.


An event is called impossible if it cannot occur under the conditions of the given experiment. For example, the event that a standard part is drawn from a batch consisting only of standard parts is certain, while drawing a non-standard part is impossible.


An event is called possible, or random, if it may or may not occur as a result of the experiment. Examples of random events: detecting a defect during inspection of a batch of finished products, a discrepancy between the size of a machined part and the specified size, or the failure of one of the links in an automated control system.


Events are called equally possible if, according to the conditions of the trial, none of them is objectively more possible than the others. For example, let a store be supplied with light bulbs (in equal quantities) by several manufacturing plants. The events consisting of buying a light bulb from any one of these factories are equally possible.


An important concept is the complete group of events. Several events in a given experiment form a complete group if at least one of them is sure to occur as a result of the experiment. For example, let an urn contain ten balls, of which six are red and four are white, and let five of the balls bear numbers. Let A be the appearance of a red ball in one draw, B the appearance of a white ball, and C the appearance of a ball with a number. The events A, B, C form a complete group of joint events.


Let us introduce the concept of an opposite, or complementary, event. The event opposite to an event A is the event Ā that necessarily occurs if A does not occur. Opposite events are incompatible and are the only ones possible; they form a complete group. For example, if a batch of manufactured products consists of good and defective items, then a single item drawn from the batch turns out to be either good (event A) or defective (event Ā).

Operations on events

When developing an apparatus and methodology for studying random events in probability theory, the concept of the sum and product of events is very important.


The sum, or union, of several events is an event consisting of the occurrence of at least one of these events.


The sum of events A and B is denoted A + B (or A ∪ B).


For example, if event A is hitting the target with the first shot and event B with the second, then the event C = A + B is hitting the target in general, no matter with which shot - the first, the second, or both.


The product, or intersection, of several events is an event consisting of the joint occurrence of all these events.


The product of events A and B is denoted AB (or A ∩ B).


For example, if event A is that the target is hit with the first shot and event B that it is hit with the second shot, then the event C = AB is that the target was hit with both shots.


The concepts of the sum and product of events have a clear geometric interpretation. Let event A consist of a point falling into region A and event B of the point falling into region B; then the event A + B consists of the point falling into the region shaded in Fig. 1, and the event AB of the point falling into the region shaded in Fig. 2.


Classic definition of the probability of a random event

To quantitatively compare events according to the degree of possibility of their occurrence, a numerical measure is introduced, which is called the probability of an event.


The probability of an event is a number that expresses the measure of the objective possibility of the occurrence of an event.


The probability of an event A will be denoted by the symbol P(A).


The probability of an event A is equal to the ratio of the number m of cases favorable to it to the total number n of uniquely possible, equally possible and incompatible cases:

P(A) = m / n.     (1.1)

This is the classical definition of probability. Thus, to find the probability of an event, one must, having considered the various outcomes of the trial, identify the set of uniquely possible, equally possible and incompatible cases, count their total number n and the number m of cases favorable to the given event, and then perform the calculation using formula (1.1).


From formula (1.1) it follows that the probability of an event is a non-negative number that can vary from zero to one depending on the proportion of favorable cases among the total number of cases:

0 ≤ P(A) ≤ 1.


Properties of Probability

Property 1. If all cases are favorable for a given event, then this event is sure to occur. Consequently, the event in question is certain, and the probability of its occurrence is 1, since in this case m = n:

P(A) = m / n = n / n = 1.



Property 2. If there is not a single case favorable for a given event, then this event cannot occur as a result of the experiment. Consequently, the event in question is impossible, and the probability of its occurrence is 0, since in this case m = 0:

P(A) = m / n = 0 / n = 0.



Property 3. The probability of the occurrence of events that form a complete group is equal to one.


Property 4. The probability of the occurrence of the opposite event Ā is determined in the same way as the probability of the occurrence of the event A:

P(Ā) = (n − m) / n,

where n − m is the number of cases favorable to the occurrence of the opposite event. Hence the probability of the opposite event is equal to the difference between unity and the probability of the event A:

P(Ā) = 1 − P(A).



An important advantage of the classical definition of the probability of an event is that with its help the probability of an event can be determined without resorting to experience, but based on logical reasoning.

Example 1. While dialing a phone number, the subscriber forgot one digit and dialed it at random. Find the probability that the correct number is dialed.


Solution. Let A denote the event that the required digit is dialed. The subscriber could dial any of the 10 digits, so the total number of possible outcomes is 10. These outcomes are the only possible ones (one of the digits must be dialed) and equally possible (the digit is dialed at random). Only one outcome favors the event A (there is only one required digit). The required probability is equal to the ratio of the number of outcomes favorable to the event to the number of all outcomes:

P(A) = 1/10 = 0.1.
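This count is small enough to sketch directly in Python (the specific forgotten digit below is a hypothetical placeholder; any single digit gives the same answer):

```python
# Classical probability by direct enumeration: 10 equally possible digits,
# exactly one of which is the digit the subscriber needs.
digits = range(10)
forgotten = 7  # hypothetical choice; the favorable count is 1 regardless
favorable = sum(1 for d in digits if d == forgotten)
p = favorable / len(digits)
print(p)  # 0.1
```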


Elements of combinatorics

In probability theory, placements, permutations and combinations are often used. If a set of n elements is given, then a placement (combination) of the n elements taken m at a time is any ordered (unordered) m-element subset of this set. When m = n, a placement is called a permutation of the n elements.


Let, for example, the set {a, b, c} be given. The placements of the three elements of this set taken two at a time are ab, ba, ac, ca, bc, cb; the combinations are ab, ac, bc.


Two combinations differ in at least one element, while placements differ either in the elements themselves or in the order in which they appear. The number of combinations of n elements taken m at a time is calculated by the formula

C(n, m) = A(n, m) / P(m) = n! / (m! (n − m)!),

where A(n, m) = n! / (n − m)! is the number of placements of n elements taken m at a time, and P(m) = m! is the number of permutations of m elements.
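A quick check of these formulas, using Python's standard library (math.comb, math.perm and math.factorial, available since Python 3.8), for n = 3 elements taken m = 2 at a time:

```python
import math

n, m = 3, 2
placements = math.perm(n, m)      # A(n, m) = n!/(n - m)! = 6
combinations = math.comb(n, m)    # C(n, m) = n!/(m!(n - m)!) = 3
permutations = math.factorial(m)  # P(m) = m! = 2

# The identity C(n, m) = A(n, m) / P(m)
print(combinations == placements // permutations)  # True
```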

Example 2. In a batch of 10 parts there are 7 standard ones. Find the probability that among 6 parts taken at random there are exactly 4 standard ones.


Solution. The total number of possible test outcomes is equal to the number of ways in which 6 parts can be extracted from 10, i.e., to the number of combinations of 10 elements taken 6 at a time, C(10, 6). The number of outcomes favorable to the event (among the 6 taken parts there are exactly 4 standard ones) is determined as follows: 4 standard parts can be taken from the 7 standard parts in C(7, 4) ways; the remaining 2 parts must then be non-standard, and 2 non-standard parts can be taken from the 3 non-standard parts in C(3, 2) ways. Therefore, the number of favorable outcomes is C(7, 4) · C(3, 2). The required probability is equal to the ratio of the number of favorable outcomes to the number of all outcomes:

P = C(7, 4) · C(3, 2) / C(10, 6) = 35 · 3 / 210 = 1/2.
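The arithmetic of this example is easy to verify with math.comb:

```python
from math import comb

# Favorable: 4 of the 7 standard parts and 2 of the 3 non-standard parts;
# total: any 6 of the 10 parts.
favorable = comb(7, 4) * comb(3, 2)  # 35 * 3 = 105
total = comb(10, 6)                  # 210
print(favorable / total)             # 0.5
```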


Statistical definition of probability

Formula (1.1) is used to calculate the probabilities of events directly only when the experiment reduces to a scheme of cases. In practice, the classical definition of probability is often not applicable, for two reasons. First, the classical definition assumes that the total number of cases is finite, whereas in fact it is often unbounded. Second, it is often impossible to represent the outcomes of an experiment as equally possible and incompatible events.


The frequency of occurrence of an event in repeated experiments tends to stabilize around some constant value. Thus, a certain constant can be associated with the event under consideration, around which the frequencies are grouped and which characterizes the objective connection between the set of conditions under which the experiments are carried out and the event.


The probability of a random event is the number around which the frequencies of this event are grouped as the number of trials increases.


This definition of probability is called statistical.


The advantage of the statistical method of determining probability is that it is based on a real experiment. However, its significant drawback is that to determine the probability it is necessary to perform a large number of experiments, which are very often associated with material costs. The statistical determination of the probability of an event, although it quite fully reveals the content of this concept, does not make it possible to actually calculate the probability.

The classical definition of probability considers the complete group of a finite number of equally possible events. In practice, very often the number of possible test outcomes is infinite. In such cases, the classical definition of probability is not applicable. However, sometimes in such cases you can use another method of calculating probability. For definiteness, we restrict ourselves to the two-dimensional case.


Let a region G of area S be given on the plane, containing another region g of area s (Fig. 3). A point is thrown into region G at random. What is the probability that the point will fall into region g? It is assumed that a point thrown at random can hit any point of region G, and that the probability of hitting any part of the region is proportional to the area of that part and does not depend on its location or shape. In this case, the probability of hitting region g when throwing a point at random into region G is

P = s / S.



Thus, in the general case, if the possibility of a random appearance of a point inside a certain area on a line, plane or in space is determined not by the position of this area and its boundaries, but only by its size, i.e. length, area or volume, then the probability of a random point falling inside a certain region is defined as the ratio of the size of this region to the size of the entire region in which a given point can appear. This is the geometric definition of probability.
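A minimal Monte Carlo sketch of this definition, using a hypothetical pair of regions: a quarter disc of radius 1 inside the unit square. The area ratio is π/4 ≈ 0.785, and randomly thrown points land in the disc with about that frequency:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible
n = 100_000
hits = 0
for _ in range(n):
    x, y = random.random(), random.random()  # random point in the unit square
    if x * x + y * y <= 1:                   # point fell inside the quarter disc
        hits += 1
print(hits / n)  # close to pi/4 ~ 0.785
```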


Example 3. A round target rotates at a constant angular velocity. One fifth of the target is painted green, and the rest is white (Fig. 4). A shot is fired at the target in such a way that hitting the target is a reliable event. You need to determine the probability of hitting the target sector colored green.


Solution. Let A denote the event “the shot hit the sector colored green.” Then P(A) = 1/5. The probability is obtained as the ratio of the area of the part of the target painted green to the entire area of the target, since hits on any part of the target are equally possible.

Axioms of probability theory

From the statistical definition of the probability of a random event it follows that the probability of an event is the number around which the frequencies of this event observed experimentally are grouped. Therefore, the axioms of probability theory are introduced so that the probability of an event has the basic properties of frequency.


Axiom 1. To each event A there corresponds a certain number P(A) satisfying the condition 0 ≤ P(A) ≤ 1, called its probability.

When a coin is tossed, we can say that it will land heads up with probability 1/2. Of course, this does not mean that if a coin is tossed 10 times, it will necessarily land on heads 5 times. If the coin is "fair" and if it is tossed many times, then heads will come up very close to half the time. Thus, there are two types of probabilities: experimental and theoretical.

Experimental and theoretical probability

If we flip a coin a large number of times - say 1000 - and count how many times it lands on heads, we can determine the probability that it lands on heads. If it lands heads 503 times, we calculate the probability of its landing heads as
503/1000, or 0.503.

This is the experimental definition of probability. This definition comes from observation and study of data, and it is quite common and very useful. Here, for example, are some probabilities that were determined experimentally:

1. The probability that a woman will develop breast cancer is 1/11.

2. If you kiss someone who has a cold, then the probability that you will also get a cold is 0.07.

3. A person who has just been released from prison has an 80% chance of returning to prison.

If we consider tossing a coin and taking into account that it is just as likely that it will come up heads or tails, we can calculate the probability of getting heads: 1/2. This is a theoretical definition of probability. Here are some other probabilities that have been determined theoretically using mathematics:

1. If there are 30 people in a room, the probability that two of them have the same birthday (excluding year) is 0.706.

2. During a trip you meet someone, and during the conversation you discover that you have a mutual friend. The typical reaction: “That can't be!” In fact, this reaction is misplaced, because the probability of such an event is quite high - just over 22%.

Thus, experimental probabilities are determined through observation and data collection, while theoretical probabilities are determined through mathematical reasoning. Examples of experimental and theoretical probabilities, such as those discussed above, and especially those that we do not expect, show why probability is worth studying. You may ask, "What is the true probability?" In fact, there is none. Probabilities within certain limits can be determined experimentally. They may or may not coincide with the probabilities that we obtain theoretically. There are situations in which it is much easier to determine one type of probability than the other. For example, it would be difficult to find the probability of catching a cold theoretically, but it can be estimated experimentally.

Calculation of experimental probabilities

Let us first consider the experimental definition of probability. The basic principle we use to calculate such probabilities is as follows.

Principle P (experimental)

If in an experiment consisting of n observations a situation or event E occurs m times, then the experimental probability of the event is said to be P(E) = m/n.

Example 1 Sociological survey. An experimental study was conducted to determine the number of left-handed people, right-handed people and people whose both hands are equally developed. The results are shown in the graph.

a) Determine the probability that the person is right-handed.

b) Determine the probability that the person is left-handed.

c) Determine the probability that a person is equally fluent in both hands.

d) Most Professional Bowling Association tournaments are limited to 120 players. Based on the data from this experiment, how many players could be left-handed?

Solution

a) The number of people who are right-handed is 82, the number of left-handed people is 17, and the number of those equally fluent with both hands is 1. The total number of observations is 100. Thus, the probability that a person is right-handed is P, where
P = 82/100, or 0.82, or 82%.

b) The probability that a person is left-handed is P, where
P = 17/100, or 0.17, or 17%.

c) The probability that a person is equally fluent in both hands is P, where
P = 1/100, or 0.01, or 1%.

d) 120 bowlers, and from (b) we can expect that 17% are left-handed. From here
17% of 120 = 0.17 × 120 = 20.4,
that is, we can expect about 20 players to be left-handed.
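Principle P applied to the survey counts of Example 1 can be sketched as follows (the counts 82, 17 and 1 are taken from the solution above):

```python
# Experimental probabilities as relative frequencies m/n of each outcome.
counts = {"right-handed": 82, "left-handed": 17, "both": 1}
n = sum(counts.values())                     # 100 observations
probs = {k: m / n for k, m in counts.items()}
print(probs["left-handed"])                  # 0.17
print(round(probs["left-handed"] * 120, 1))  # 20.4 expected among 120 bowlers
```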

Example 2 Quality control . It is very important for a manufacturer to keep the quality of its products at a high level. In fact, companies hire quality control inspectors to ensure this process. The goal is to produce the minimum possible number of defective products. But since the company produces thousands of products every day, it cannot afford to test every product to determine whether it is defective or not. To find out what percentage of products are defective, the company tests far fewer products.
The USDA requires that 80% of the seeds sold by growers must germinate. To determine the quality of the seeds that an agricultural company produces, 500 seeds from those that were produced are planted. After this, it was calculated that 417 seeds sprouted.

a) What is the probability that the seed will germinate?

b) Do the seeds meet government standards?

Solution a) We know that out of 500 seeds that were planted, 417 sprouted. Probability of seed germination P, and
P = 417/500 = 0.834, or 83.4%.

b) Since the percentage of germinated seeds, 83.4%, exceeds the required 80%, the seeds meet the government standard.

Example 3 Television ratings. According to statistics, there are 105,500,000 households with televisions in the United States. Every week, information about viewing programs is collected and processed. In one week, 7,815,000 households tuned in to the hit comedy series "Everybody Loves Raymond" on CBS and 8,302,000 households tuned in to the hit series "Law & Order" on NBC (Source: Nielsen Media Research). What is the probability that one household's TV is tuned to "Everybody Loves Raymond" during a given week? to "Law & Order"?

Solution The probability that the TV in one household is tuned to "Everybody Loves Raymond" is P, and
P = 7,815,000/105,500,000 ≈ 0.074 ≈ 7.4%.
The chance that the household's TV was tuned to Law & Order is P, and
P = 8,302,000/105,500,000 ≈ 0.079 ≈ 7.9%.
These percentages are called ratings.

Theoretical probability

Suppose we are conducting an experiment, such as tossing a coin, throwing darts, drawing a card from a deck, or testing products for quality on an assembly line. Each possible result of such an experiment is called an outcome. The set of all possible outcomes is called the outcome space. An event is a set of outcomes, that is, a subset of the outcome space.

Example 4 Throwing darts. Suppose that in a dart throwing experiment, a dart hits a target. Find each of the following:

a) Outcomes

b) Outcome space

Solution
a) The outcomes are: hitting black (B), hitting red (R), and hitting white (W).

b) The outcome space is {hitting black, hitting red, hitting white}, which can be written simply as {B, R, W}.

Example 5 Throwing dice. A die is a cube with six sides, each with one to six dots on it.


Suppose we are throwing a die. Find
a) Outcomes
b) Outcome space

Solution
a) Outcomes: 1, 2, 3, 4, 5, 6.
b) The outcome space is {1, 2, 3, 4, 5, 6}.

We denote the probability that an event E occurs as P(E). For example, “the coin will land on heads” can be denoted by H. Then P(H) represents the probability that the coin will land on heads. When all outcomes of an experiment have the same probability of occurring, they are said to be equally likely. To see the differences between events that are equally likely and events that are not, consider the target shown below.

For target A, the events of hitting black, red and white are equally probable, since the black, red and white sectors are the same. However, for target B, the zones with these colors are not the same, that is, hitting them is not equally probable.

Principle P (Theoretical)

If an event E can happen in m ways out of n possible equally probable outcomes from the outcome space S, then the theoretical probability of the event, P(E), is
P(E) = m/n.

Example 6 What is the probability of rolling a die to get a 3?

Solution There are 6 equally probable outcomes on a die, and there is only one way of rolling the number 3. Then the probability is P(3) = 1/6.

Example 7 What is the probability of rolling an even number on a die?

Solution The event is the throwing of an even number. This can happen in 3 ways (if you roll a 2, 4 or 6). The number of equally probable outcomes is 6. Then the probability P(even) = 3/6, or 1/2.

We will use a number of examples involving a standard deck of 52 cards. This deck consists of the cards shown in the figure below.

Example 8 What is the probability of drawing an Ace from a well-shuffled deck of cards?

Solution There are 52 outcomes (the number of cards in the deck), they are equally likely (if the deck is well shuffled), and there are 4 ways to draw an Ace, so according to the P principle, the probability
P(draw an ace) = 4/52, or 1/13.

Example 9 Suppose we choose, without looking, one ball from a bag with 3 red balls and 4 green balls. What is the probability of choosing a red ball?

Solution There are 7 equally probable outcomes of drawing any ball, and since the number of ways to draw a red ball is 3, we get
P(red ball selection) = 3/7.

The following statements are results from Principle P.

Properties of Probability

a) If event E cannot happen, then P(E) = 0.
b) If event E is certain to happen then P(E) = 1.
c) The probability that event E will occur is a number from 0 to 1: 0 ≤ P(E) ≤ 1.

For example, in a coin toss, the event that the coin lands on its edge has zero probability. The event that the coin lands either heads or tails has probability 1.

Example 10 Let's assume that 2 cards are drawn from a 52-card deck. What is the probability that both of them are spades?

Solution The number n of ways to draw 2 cards from a well-shuffled deck of 52 cards is 52C2. Since 13 of the 52 cards are spades, the number m of ways to draw 2 spades is 13C2. Then
P(drawing 2 spades) = m/n = 13C2 / 52C2 = 78/1326 = 1/17.
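Exact rational arithmetic makes the reduction 78/1326 = 1/17 easy to confirm:

```python
from fractions import Fraction
from math import comb

m = comb(13, 2)     # ways to draw 2 spades: 78
n = comb(52, 2)     # ways to draw any 2 cards: 1326
p = Fraction(m, n)  # Fraction reduces to lowest terms automatically
print(m, n, p)      # 78 1326 1/17
```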

Example 11 Suppose 3 people are randomly selected from a group of 6 men and 4 women. What is the probability that 1 man and 2 women will be selected?

Solution The number of ways to select three people from a group of 10 is 10C3. One man can be chosen in 6C1 ways, and 2 women can be chosen in 4C2 ways. By the fundamental counting principle, the number of ways to choose 1 man and 2 women is 6C1 · 4C2. Then the probability that 1 man and 2 women will be selected is
P = 6C1 · 4C2 / 10C3 = 3/10.

Example 12 Throwing dice. What is the probability of rolling a total of 8 on two dice?

Solution Each die has 6 possible outcomes. The outcomes are paired, meaning there are 6 · 6, or 36, possible ways in which the numbers on the two dice can appear. (It helps if the dice are distinguishable, say one red and one blue - this makes the outcomes easier to visualize.)

The pairs of numbers that add up to 8 are shown in the figure below. There are 5 possible ways to obtain a sum equal to 8, hence the probability is 5/36.
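Enumerating all 36 ordered outcomes confirms the count of 5:

```python
# All 36 equally likely (red, blue) outcomes of two distinguishable dice.
outcomes = [(r, b) for r in range(1, 7) for b in range(1, 7)]
favorable = [pair for pair in outcomes if sum(pair) == 8]
print(favorable)                      # [(2, 6), (3, 5), (4, 4), (5, 3), (6, 2)]
print(len(favorable), len(outcomes))  # 5 36
```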

Probability theory is a branch of mathematics that studies the patterns of random phenomena: random events, random variables, their properties and operations on them.

For a long time, probability theory did not have a clear definition; one was formulated only in 1929. The emergence of probability theory as a science dates back to the Middle Ages and the first attempts at mathematical analysis of gambling (coin tossing, dice, roulette). The French mathematicians of the 17th century Blaise Pascal and Pierre Fermat, while studying the prediction of winnings in gambling, discovered the first probabilistic patterns that arise when throwing dice.

Probability theory arose as a science from the belief that mass random events are based on certain patterns. Probability theory studies these patterns.

Probability theory deals with the study of events whose occurrence is not known with certainty. It allows you to judge the degree of probability of the occurrence of some events compared to others.

For example: it is impossible to determine unambiguously whether “heads” or “tails” will result from a single coin toss, but with repeated tossing approximately equal numbers of “heads” and “tails” appear, which means that the probability of “heads” (or of “tails”) is equal to 50%.

A trial in this case is the realization of a certain set of conditions, that is, in this case, the toss of the coin. The trial can be carried out an unlimited number of times. Here the set of conditions includes random factors.

The result of a trial is an event. Events are:

  1. Reliable (always occurs as a result of testing).
  2. Impossible (never happens).
  3. Random (may or may not occur as a result of the test).

For example, when tossing a coin, an impossible event is that the coin lands on its edge, and a random event is the appearance of “heads” or “tails”. A specific trial outcome is called an elementary event. As a result of a trial, only elementary events occur. The set of all possible, distinct, specific trial outcomes is called the space of elementary events.

Basic concepts of the theory

Probability is the degree of possibility of the occurrence of an event. When the reasons for some possible event actually to occur outweigh the opposite reasons, the event is called probable; otherwise it is called unlikely or improbable.

A random variable is a quantity that, as a result of a trial, can take one value or another, and it is not known in advance which one. For example: the number of calls to a fire station per day, the number of hits with 10 shots, etc.

Random variables can be divided into two categories.

  1. A discrete random variable is a quantity that, as a result of a trial, can take particular values with certain probabilities, forming a countable set (a set whose elements can be numbered). This set can be either finite or infinite. For example, the number of shots before the first hit on the target is a discrete random variable, because this quantity can take an infinite, albeit countable, number of values.
  2. A continuous random variable is a quantity that can take any value from some finite or infinite interval. Obviously, the number of possible values of a continuous random variable is infinite.

Probability space is a concept introduced by A.N. Kolmogorov in the 1930s to formalize the concept of probability; it gave rise to the rapid development of probability theory as a rigorous mathematical discipline.

A probability space is a triple (Ω, F, P) (sometimes written in angle brackets: ⟨Ω, F, P⟩), where

Ω is an arbitrary set, the elements of which are called elementary events, outcomes or points;
F is a sigma-algebra of subsets of Ω, called (random) events;
P is a probability measure, or probability, i.e. a sigma-additive finite measure such that P(Ω) = 1.

De Moivre-Laplace theorem- one of the limit theorems of probability theory, established by Laplace in 1812. It states that the number of successes when repeating the same random experiment over and over again with two possible outcomes is approximately normally distributed. It allows you to find an approximate probability value.

If for each of n independent trials the probability of the occurrence of some random event A is equal to p (0 < p < 1), and m is the number of trials in which A actually occurs, then for large n the probability that the inequality a < (m − np)/√(np(1 − p)) < b holds is close to the value of the Laplace integral.
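A sketch of the approximation for a hypothetical case: n = 100 tosses of a fair coin (p = 1/2), estimating P(45 ≤ m ≤ 55) through the normal (Laplace) integral with a continuity correction. Only the standard library's error function is used:

```python
from math import erf, sqrt

n, p = 100, 0.5
mu = n * p                     # np = 50
sigma = sqrt(n * p * (1 - p))  # sqrt(np(1 - p)) = 5

def Phi(x):
    """Standard normal CDF expressed through the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# Continuity-corrected normal approximation to P(45 <= m <= 55)
approx = Phi((55.5 - mu) / sigma) - Phi((44.5 - mu) / sigma)
print(round(approx, 3))  # 0.729; the exact binomial sum is ~0.7287
```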

The distribution function in probability theory is a function characterizing the distribution of a random variable or random vector: F(x) is the probability that the random variable X takes a value less than or equal to x, where x is an arbitrary real number. If certain conditions are met, it completely determines the random variable.

The expected value is the mean value of a random variable, taken with respect to its probability distribution as considered in probability theory. In English-language literature it is denoted E[X], in Russian M[X]. In statistics, the notation μ is often used.

Let a probability space (Ω, F, P) and a random variable X defined on it be given; that is, by definition, X is a measurable function. Then, if the Lebesgue integral of X over the space Ω exists, it is called the mathematical expectation, or mean value, and is denoted E[X] = ∫Ω X dP.

The variance of a random variable is a measure of the spread of the random variable, i.e. of its deviation from the mathematical expectation. It is denoted D[X] in Russian literature and Var(X) in foreign literature. In statistics, the notation σ² is often used. The square root of the variance is called the standard deviation or standard spread.

Let X be a random variable defined on some probability space. Then

D[X] = E[(X − E[X])²],

where the symbol E denotes the mathematical expectation.
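For a discrete random variable, expectation and variance reduce to probability-weighted sums; a sketch for a fair die:

```python
# X = number of points shown by a fair die; each value has probability 1/6.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

E = sum(x * p for x, p in zip(values, probs))             # E[X] = 3.5
D = sum((x - E) ** 2 * p for x, p in zip(values, probs))  # E[(X - E[X])^2]
sigma = D ** 0.5                                          # standard deviation

print(round(E, 4), round(D, 4))  # 3.5 2.9167
```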

In probability theory, two random events are called independent if the occurrence of one of them does not change the probability of the occurrence of the other. Similarly, two random variables are called dependent if the value of one of them affects the probability of the values of the other.

The simplest form of the law of large numbers is Bernoulli's theorem, which states that if the probability of an event is the same in all trials, then as the number of trials increases, the frequency of the event tends to the probability of the event and ceases to be random.

The law of large numbers in probability theory states that the arithmetic mean of a finite sample from a fixed distribution is close to the theoretical mean of that distribution. Depending on the type of convergence, a distinction is made between the weak law of large numbers, where convergence is in probability, and the strong law of large numbers, where convergence is almost sure.
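The Bernoulli form of the law is easy to watch in simulation: the relative frequency of heads over longer and longer runs of simulated fair-coin tosses settles near the theoretical probability 0.5 (the seed is fixed only to make the sketch reproducible):

```python
import random

random.seed(1)
for n in (100, 10_000, 100_000):
    heads = sum(random.randint(0, 1) for _ in range(n))
    print(n, heads / n)  # the frequency approaches 0.5 as n grows
```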

The general meaning of the law of large numbers is that the joint action of a large number of identical and independent random factors leads to a result that, in the limit, does not depend on chance.

Methods for estimating probability based on finite sample analysis are based on this property. A clear example is the forecast of election results based on a survey of a sample of voters.

Central limit theorems- a class of theorems in probability theory stating that the sum of a sufficiently large number of weakly dependent random variables that have approximately the same scales (none of the terms dominates or makes a determining contribution to the sum) has a distribution close to normal.

Since many random variables in applications are formed under the influence of several weakly dependent random factors, their distribution is considered normal. In this case, the condition must be met that none of the factors is dominant. Central limit theorems in these cases justify the use of the normal distribution.
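As a hedged sketch of this statement, the sum of twelve uniform(0, 1) variables, each with variance 1/12, shifted by its mean of 6, behaves approximately like a standard normal variable; the sample size and seed below are arbitrary choices:

```python
import random

# Central limit theorem sketch: the sum of 12 independent uniform(0, 1)
# variables has mean 6 and variance 12 * (1/12) = 1, so the shifted sum
# is approximately N(0, 1). (Illustrative, not a proof.)
rng = random.Random(0)

samples = [sum(rng.random() for _ in range(12)) - 6 for _ in range(10000)]

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 3), round(var, 3))  # close to 0 and 1
```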

Probability theory is a mathematical science that studies patterns in mass random phenomena.

Before the emergence of probability theory as a generally accepted theory, determinism dominated in science, according to which the implementation of a certain set of conditions uniquely determines the result. The classic example is mechanics. For example, based on the laws of celestial mechanics, solar and lunar eclipses can be very accurately predicted from the known positions of the planets in the solar system at some moment. Such laws are called deterministic laws.

However, practice has shown that this approach is not always applicable. Not all phenomena of the macrocosm can be accurately predicted, even though our knowledge of it is continuously refined and deepened. The laws and regularities of the microworld are even less deterministic.

The mathematical laws of probability theory reflect real statistical laws that objectively exist in mass random phenomena.

Probability theory developed initially as an applied discipline. In this regard, its concepts and conclusions were colored by the areas of knowledge in which they were obtained.

The works of B. V. Gnedenko, L. E. Maystrov, and A. N. Kolmogorov present the main stages in the development of probability theory. For brevity, we summarize them in a table.

Table 1

Stages of development of probability theory

| Stage name | Basic concepts | Sources of formation and development |
|---|---|---|
| Prehistory of probability theory (until the end of the 16th century) | Equally possible (equally probable) outcomes; the principle "no more one way than another"; probabilistic knowledge and reasoning | Solving elementary problems, philosophy, gambling |
| Emergence of probability theory as a science (17th century to the beginning of the 18th century) | Quantitative assessment of the possibility of a random event occurring; ideas about the frequency of an event; mathematical expectation; addition and multiplication theorems; combinatorial formulas | Demography, insurance, assessment of observation errors |
| Formation of the foundations of probability theory (1713 to the middle of the 19th century) | Classical and statistical definitions of probability; geometric probabilities; addition and multiplication theorems; law of large numbers; mathematical expectation; Bernoulli's formula; Bayes' theorem; random variables | Demography, insurance, assessment of observation errors, natural science |
| The Russian (St. Petersburg) school (second half of the 19th century to the 20th century) | Limit theorems; theory of random processes; generalizations of the law of large numbers; method of moments | Product quality control, natural science, etc. |
| The current stage of development of probability theory (20th–21st centuries) | Axiomatic construction of probability theory; frequency interpretation of probability; stationary random processes, etc. | Internal needs of mathematics itself, statistical physics, information theory, theory of random processes, astronomy, biology, genetics, etc. |

The sources of development presented in the table reflect the needs of practice, which became the impetus for the development of probability theory.

By the 17th century, philosophy had accumulated quite a wealth of material, which influenced the origin and first period of development of probability theory. The main source of the emergence of probability theory is practice. The need to create a mathematical apparatus for the analysis of random phenomena arose from the needs of processing and generalization of statistical material. However, the theory of probability was formed not only on the basis of practical problems: these problems are too complex. Gambling turned out to be a simpler and more convenient material for studying the patterns of random phenomena. On the basis of gambling, along with the basic concepts, methods of probability theory were also developed.

The origin of probability theory began when a courtier of the French king, the Chevalier de Méré (1607-1684), himself a gambler, turned to the French physicist, mathematician and philosopher Blaise Pascal (1623-1662) with questions about the problem of points. Two famous questions from de Méré to Pascal have reached us: 1) how many times must two dice be thrown so that the probability of at least one double six exceeds one half; 2) how should the stakes be divided fairly if the players stop the game prematurely? Pascal turned to the mathematician Pierre Fermat (1601-1665) and corresponded with him about these problems. Together they established some of the initial principles of probability theory; in particular, they arrived at the concept of mathematical expectation and the theorems of addition and multiplication of probabilities.

Probabilistic methods have found direct practical application, first of all, in insurance problems. Since then, probability theory has found increasing application in various fields.

The French scientists B. Pascal and P. Fermat and the Dutch scientist H. Huygens (1629-1695) are considered the discoverers of probability theory. A new science began to emerge, its specifics and methodology began to emerge: definitions, theorems, methods.

A major step in the development of probability theory is associated with the work of Jacob Bernoulli (1654–1705). He gave the first proof of one of the most important propositions of probability theory: the law of large numbers. Even before Jacob Bernoulli, many had noted as an empirical fact the feature of random phenomena known as "stability of frequencies over a large number of experiments": with a large number of experiments whose individual outcomes are random, the relative frequency of a given outcome tends to stabilize, approaching a certain number, the probability of that outcome. Jacob Bernoulli was the first to give a theoretical justification for this empirical fact. Bernoulli's theorem, the simplest form of the law of large numbers, establishes a connection between the probability of an event and the frequency of its occurrence: with a sufficiently large number of experiments one can, with practical certainty, expect arbitrarily close agreement between frequency and probability.

Another important stage in the development of probability theory is associated with the name of Abraham de Moivre (1667-1754). This scientist was the first to introduce and, for the simplest case, justify a law that is observed very often in random phenomena: the so-called normal law (Gauss's law).

The normal law plays an extremely important role in random phenomena. The theorems that justify this law for certain conditions are in probability theory generally called the “central limit theorem.”

A harmonious and systematic presentation of the foundations of probability theory was first given by the famous mathematician Laplace (1749–1827). He proved one of the forms of the central limit theorem (the de Moivre-Laplace theorem) and developed a number of remarkable applications of probability theory to practical questions, in particular to the analysis of observational and measurement errors.

A significant step forward in the development of probability theory is associated with the name of Gauss (1777–1855), who gave an even more general justification for the normal law and developed a method for processing experimental data known as the “least squares method”.

It is worth noting the work of Poisson (1781–1840), who proved a more general form of the law of large numbers than Jacob Bernoulli, and was also the first to apply the theory of probability to shooting problems. The name of Poisson is associated with one of the laws of distribution, which plays an important role in probability theory and its applications.

The entire 18th and early 19th centuries were characterized by the rapid development of probability theory and widespread enthusiasm for it. Probability theory became a "fashionable" science. It began to be used not only where its use was legitimate, but also where it was not justified at all.

This period was characterized by numerous attempts to apply probability theory to the study of social phenomena, to the so-called "moral" sciences. Numerous works appeared on questions of legal proceedings, history, politics, even theology, in which the apparatus of probability theory was used. All these pseudoscientific studies are characterized by an extremely simplified, mechanical approach to the social phenomena they consider. The reasoning starts from arbitrarily assigned probabilities (for example, in questions of legal proceedings, each person's propensity to tell the truth or lie is estimated by some constant probability, the same for all people), and then the social problem is solved as a simple arithmetic problem.

Naturally, all such attempts were doomed to failure and could not play a positive role in the development of science. On the contrary, their indirect result was that in the twenties and thirties of the 19th century in Western Europe the widespread enthusiasm for probability theory gave way to disappointment and skepticism. Probability theory came to be seen as a dubious, second-rate science, a kind of mathematical entertainment hardly worthy of serious study.

It is remarkable that it was at this time that the famous St. Petersburg mathematical school was created in Russia, through whose works probability theory was placed on a solid logical and mathematical basis and turned into a reliable, accurate and effective method of knowledge. Since the appearance of this school, the development of probability theory has been closely connected with the work of Russian and, later, Soviet scientists.

Among the scientists of the St. Petersburg mathematical school one should name V. Ya. Bunyakovsky (1804-1889), author of the first course in probability theory in Russian, creator of modern Russian terminology in probability theory, and author of original research in statistics and demography.

A student of V. Ya. Bunyakovsky was the great Russian mathematician P. L. Chebyshev (1821-1894), who further expanded and generalized the law of large numbers. In addition, P. L. Chebyshev introduced the very powerful and fruitful method of moments into probability theory.

A student of P. L. Chebyshev was A. A. Markov (1856-1922), who significantly expanded the scope of the law of large numbers and the central limit theorem, extending them not only to independent but also to dependent experiments. The most important merit of A. A. Markov was that he laid the foundations of a completely new branch of probability theory: the theory of random, or "stochastic," processes. The development of this theory constitutes the main content of modern probability theory.

A. M. Lyapunov (1857–1918), whose name is associated with the first proof of the central limit theorem under extremely general conditions, was also a student of P. L. Chebyshev. To prove his theorem, A. M. Lyapunov developed a special method of characteristic functions, widely used in modern probability theory.

A characteristic feature of the work of the St. Petersburg mathematical school was the exceptional clarity of the formulation of problems, the complete mathematical rigor of the methods used, and at the same time the close connection of theory with the immediate requirements of practice. Through the works of scientists of the St. Petersburg mathematical school, probability theory was brought out from the margins of science and placed as a full member of the exact mathematical sciences. The conditions for the application of her methods were strictly defined, and the methods themselves were brought to a high degree of perfection.

The Soviet school of probability theory, having inherited the traditions of the St. Petersburg mathematical school, occupies a leading place in world science. Let us name only some of the largest Soviet scientists whose works played a decisive role in the development of modern probability theory and its practical applications.

S. N. Bernstein developed the first complete axiomatics of probability theory, and also significantly expanded the scope of application of limit theorems.

A. Ya. Khinchin (1894-1959) is known for his research on further generalizing and strengthening the law of large numbers, but mainly for his research in the field of stationary random processes.

A number of the most important fundamental works in various fields of probability theory and mathematical statistics belong to A. N. Kolmogorov. He gave the most complete axiomatic construction of probability theory, connecting it with one of the most important branches of modern mathematics: the metric theory of functions. The work of A. N. Kolmogorov is of particular importance in the theory of random functions (stochastic processes), which currently forms the basis of all research in this area. His works related to the assessment of effectiveness formed the basis of a whole new scientific direction in the theory of firing, which then grew into the broader science of the effectiveness of combat operations.

V. I. Romanovsky and N. V. Smirnov are known for their work in mathematical statistics; E. E. Slutsky, in the theory of random processes; B. V. Gnedenko, in queuing theory; E. B. Dynkin, in Markov random processes; V. S. Pugachev, in random processes as applied to automatic control problems.

The development of probability theory abroad is also currently proceeding at an accelerated pace due to the urgent demands of practice. As in our country, priority attention is paid to questions related to random processes. Significant works in this area belong to N. Wiener, W. Feller, and J. Doob. Important works on probability theory and mathematical statistics belong to R. Fisher, J. Neyman, and H. Cramér.

Probability theory, like other branches of mathematics, developed from the needs of practice, and abstractly it reflects patterns in mass random events. These patterns play a very important role in various fields of natural science, medicine, technology, economics, and military affairs. Many branches of probability theory were developed due to the needs of practice.

“Accidents are not accidental”... This sounds like something a philosopher might say, but in fact studying randomness is the business of the great science of mathematics. In mathematics, chance is handled by probability theory. Formulas and examples of problems, as well as the main definitions of this science, are presented in this article.

What is probability theory?

Probability theory is one of the mathematical disciplines that studies random events.

To make it a little clearer, let's give a small example: if you toss a coin, it can land heads or tails. While the coin is in the air, both outcomes are possible; that is, the probability of each is 1/2. If one card is drawn from a deck of 36 cards, the probability of drawing any particular card is 1/36. It would seem that there is nothing to explore or predict here, especially with mathematical formulas. However, if you repeat a certain action many times, you can identify a certain pattern and, based on it, predict the outcome of events under other conditions.

To summarize all of the above, probability theory in the classical sense studies the possibility of the occurrence of one of the possible events in a numerical value.

From the pages of history

The theory of probability, formulas and examples of the first tasks appeared in the distant Middle Ages, when attempts to predict the outcome of card games first arose.

Initially, probability theory had nothing to do with mathematics. It was justified by empirical facts or properties of an event that could be reproduced in practice. The first works in this area as a mathematical discipline appeared in the 17th century. The founders were Blaise Pascal and Pierre Fermat. They studied gambling for a long time and saw certain patterns, which they decided to tell the public about.

The same methods were developed independently by Christiaan Huygens, although he was not familiar with the results of Pascal and Fermat. He introduced the concept of "probability theory" along with formulas and examples that are considered the first in the history of the discipline.

The works of Jacob Bernoulli, Laplace's and Poisson's theorems are also of no small importance. They made probability theory more like a mathematical discipline. Probability theory, formulas and examples of basic tasks received their current form thanks to Kolmogorov’s axioms. As a result of all the changes, probability theory became one of the mathematical branches.

Basic concepts of probability theory. Events

The main concept of this discipline is “event”. There are three types of events:

  • Reliable. Those that will happen anyway (the coin will fall).
  • Impossible. Events that will not happen under any circumstances (the coin will remain hanging in the air).
  • Random. The ones that will happen or won't happen. They can be influenced by various factors that are very difficult to predict. If we talk about a coin, then there are random factors that can affect the result: the physical characteristics of the coin, its shape, its original position, the force of the throw, etc.

All events in the examples are denoted by capital Latin letters, with the exception of P, which is reserved for probability. For example:

  • A = “students came to lecture.”
  • Ā = “students did not come to the lecture.”

In practical tasks, events are usually written down in words.

One of the most important characteristics of events is whether they are equally possible. That is, if you toss a coin, every initial outcome is possible until it lands. But events need not be equally possible: this happens when someone deliberately influences the outcome, for example with "marked" playing cards or dice whose center of gravity is shifted.

Events can also be compatible and incompatible. Compatible events do not exclude each other's occurrence. For example:

  • A = “the first student came to the lecture.”
  • B = “the second student came to the lecture.”

These events are independent of each other, and the occurrence of one of them does not affect the occurrence of the other. Incompatible events are defined by the fact that the occurrence of one excludes the occurrence of the other. Speaking of the same coin, the loss of “tails” makes the appearance of “heads” impossible in the same experiment.

Actions on events

Events can be multiplied and added; accordingly, logical connectives “AND” and “OR” are introduced in the discipline.

The sum of events means that either A occurs, or B occurs, or both occur simultaneously. If they are incompatible, the last option is impossible: either A or B will occur.

Multiplication of events consists in the appearance of A and B at the same time.

Now we can give several examples to better remember the basics, probability theory and formulas. Examples of problem solving below.

Exercise 1: The company takes part in a competition to receive contracts for three types of work. Possible events that may occur:

  • A = “the firm will receive the first contract.”
  • A 1 = “the firm will not receive the first contract.”
  • B = “the firm will receive a second contract.”
  • B 1 = “the firm will not receive a second contract”
  • C = “the firm will receive a third contract.”
  • C 1 = “the firm will not receive a third contract.”

Using actions on events, we will try to express the following situations:

  • K = “the company will receive all contracts.”

In mathematical form, the equation will have the following form: K = ABC.

  • M = “the company will not receive a single contract.”

M = A 1 B 1 C 1.

Let’s complicate the task: H = “the company will receive exactly one contract.” Since it is not known which contract the company will receive (the first, the second or the third), it is necessary to record the entire series of possible events:

H = A 1 BC 1 υ AB 1 C 1 υ A 1 B 1 C.

A 1 BC 1 is the event in which the firm does not receive the first and third contracts but receives the second. The other possible outcomes are written in the same way. The symbol υ in the discipline denotes the connective “OR” (union). Translated into plain language, the company will receive either the first contract, or the second, or the third. In a similar way you can write down other conditions in the discipline “Probability Theory”. The formulas and examples of problem solving presented above will help you do this yourself.
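The event algebra above can be checked by brute-force enumeration; the sketch below (with True meaning "contract received") is illustrative and not part of the original exercise:

```python
from itertools import product

# Enumerate all elementary outcomes for the three contracts and collect
# the event H = "the firm receives exactly one contract".
outcomes = list(product([True, False], repeat=3))   # 2^3 = 8 outcomes

H = [o for o in outcomes if sum(o) == 1]
print(H)  # [(True, False, False), (False, True, False), (False, False, True)]
```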

Actually, the probability

Perhaps, in this mathematical discipline, the probability of an event is the central concept. There are 3 definitions of probability:

  • classic;
  • statistical;
  • geometric.

Each has its place in the study of probability. Probability theory, formulas and examples (9th grade) mainly use the classic definition, which sounds like this:

  • The probability of situation A is equal to the ratio of the number of outcomes that favor its occurrence to the number of all possible outcomes.

The formula looks like this: P(A)=m/n.

A is actually an event. If a case opposite to A appears, it can be written as Ā or A 1 .

m is the number of outcomes favorable to the event.

n is the number of all possible outcomes.

For example, A = “draw a card of the heart suit.” There are 36 cards in a standard deck, 9 of them are of hearts. Accordingly, the formula for solving the problem will look like:

P(A)=9/36=0.25.

As a result, the probability that a card of the heart suit will be drawn from the deck will be 0.25.
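The classical formula P(A) = m/n is a one-liner in code; this minimal sketch simply applies it to the card example above:

```python
# Classical definition of probability: P(A) = m / n,
# applied to drawing a heart from a 36-card deck (m = 9, n = 36).
def classical_probability(favorable, total):
    return favorable / total

p_heart = classical_probability(9, 36)
print(p_heart)  # 0.25
```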

Towards higher mathematics

By now it should be a little clearer what probability theory is and how its formulas and example problems appear in the school curriculum. However, probability theory is also part of higher mathematics as taught in universities, which mostly uses the geometric and statistical definitions of probability and more complex formulas.

Probability theory is very interesting. It is better to start studying its formulas and examples (higher mathematics) small: with the statistical (or frequency) definition of probability.

The statistical approach does not contradict the classical one but slightly expands it. If in the first case it was necessary to determine the probability with which an event will occur, in this method it is determined how often the event actually occurs. Here a new concept of “relative frequency” is introduced, denoted W n (A). The formula is no different from the classical one:

W n (A) = m/n,

where n is the total number of trials performed and m is the number of trials in which event A occurred.

If the classical formula is used for prediction, the statistical one is calculated from the results of an experiment. Let's take a small problem as an example.

The technical control department checks products for quality. Among 100 products, 3 were found to be of poor quality. How do we find the relative frequency of quality products?

A = “the appearance of a quality product.”

W n (A)=97/100=0.97

Thus, the relative frequency of quality products is 0.97. Where did the 97 come from? Of the 100 products checked, 3 were found to be of poor quality; subtracting 3 from 100 gives 97, the number of quality products.
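The same arithmetic can be written out directly; a minimal sketch of the quality-control example:

```python
# Relative frequency W_n(A) = (occurrences of A) / (number of trials),
# for A = "the product is of good quality".
checked = 100
defective = 3

w = (checked - defective) / checked
print(w)  # 0.97
```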

A little about combinatorics

Another tool of probability theory is combinatorics. Its basic principle, the multiplication rule, is that if a certain choice A can be made in m different ways, and a choice B can be made in n different ways, then the pair A and B can be chosen in m × n ways.

For example, there are 5 roads leading from city A to city B. There are 4 paths from city B to city C. In how many ways can you get from city A to city C?

It's simple: 5x4=20, that is, in twenty different ways you can get from point A to point C.

Let's complicate the task. How many ways are there to lay out cards in solitaire? There are 36 cards in the deck - this is the starting point. To find out the number of ways, you need to “subtract” one card at a time from the starting point and multiply.

That is, 36×35×34×33×32×...×2×1 = a number that does not fit on a calculator screen, so it is simply written 36!. The sign "!" after a number indicates that all integers from 1 up to that number are multiplied together.

In combinatorics there are such concepts as permutation, placement and combination. Each of them has its own formula.

An ordered selection of elements of a set is called an arrangement. Arrangements can be with repetition, when one element may be used several times, or without repetition, when elements are not repeated. Here n is the total number of elements and m is the number of elements that participate in the arrangement. The formula for arrangements without repetition is:

A n m =n!/(n-m)!

Arrangements of all n elements, which differ from one another only in the order of the elements, are called permutations. In mathematics this is written: P n = n!

Combinations of n elements taken m at a time are selections in which only which elements are chosen matters, not their order. The formula is:

C n m =n!/m!(n-m)!
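These formulas map directly onto the standard library; the sketch below (the example values 5, 2 and 6, 2 are arbitrary) uses math.factorial, math.perm and math.comb, which exist in Python 3.8+:

```python
import math

# Arrangements (partial permutations), permutations and combinations,
# following the formulas above.
def arrangements(n, m):
    return math.factorial(n) // math.factorial(n - m)   # A(n, m) = n!/(n-m)!

print(arrangements(5, 2))   # 20, same as math.perm(5, 2)
print(math.factorial(4))    # P_4 = 4! = 24
print(math.comb(6, 2))      # C(6, 2) = 6!/(2! * 4!) = 15
```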

Bernoulli's formula

In probability theory, as in every discipline, there are works of outstanding researchers in their field who have taken it to a new level. One of these works is the Bernoulli formula, which allows you to determine the probability of a certain event occurring under independent conditions. This suggests that the occurrence of A in an experiment does not depend on the occurrence or non-occurrence of the same event in earlier or subsequent trials.

Bernoulli's equation:

P n (m) = C n m ×p m ×q n-m.

The probability (p) of the occurrence of event (A) is constant for each trial. The probability that the situation will occur exactly m times in n number of experiments will be calculated by the formula presented above. Accordingly, the question arises of how to find out the number q.

If event A occurs with probability p, then it may also fail to occur. Since the probabilities of all outcomes of a trial add up to one, q = 1 - p is the probability that the event does not occur.

Now you know Bernoulli's formula (probability theory). We will consider examples of problem solving (first level) below.

Task 2: A store visitor makes a purchase with probability 0.2. Six visitors enter the store independently. What is the probability that a given number of them will make a purchase?

Solution: Since the number of buyers is not fixed (it may be anything from none to all six), each such probability is calculated with the Bernoulli formula.

A = “the visitor will make a purchase.”

In this case: p = 0.2 (as indicated in the task). Accordingly, q=1-0.2 = 0.8.

n = 6 (since there are 6 customers in the store). The number m will vary from 0 (not a single customer will make a purchase) to 6 (all visitors to the store will purchase something). As a result, we get the solution:

P 6 (0) = C 0 6 ×p 0 ×q 6 =q 6 = (0.8) 6 = 0.2621.

None of the buyers will make a purchase with probability 0.2621.

How else is Bernoulli's formula (probability theory) used? Examples of problem solving (second level) below.

After the above example, questions may arise about where C and p went. With respect to p: any number raised to the power 0 equals one. As for C, it can be found by the formula:

C n m = n! /m!(n-m)!

Since in the first example m = 0, respectively, C = 1, which in principle does not affect the result. Using the new formula, let's try to find out what is the probability of two visitors purchasing goods.

P 6 (2) = C 6 2 ×p 2 ×q 4 = (6×5×4×3×2×1) / (2×1×4×3×2×1) × (0.2) 2 × (0.8) 4 = 15 × 0.04 × 0.4096 = 0.246.
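The Bernoulli formula can be written as a short function; this sketch reuses the store example (n = 6, p = 0.2) and is illustrative rather than canonical:

```python
import math

# Bernoulli (binomial) formula: P_n(m) = C(n, m) * p**m * q**(n-m),
# for the store example with n = 6 visitors and p = 0.2.
def bernoulli(n, m, p):
    q = 1 - p
    return math.comb(n, m) * p**m * q**(n - m)

print(round(bernoulli(6, 0, 0.2), 4))  # 0.2621: nobody buys
print(round(bernoulli(6, 2, 0.2), 4))  # 0.2458 (~0.246): exactly two buy
```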

The theory of probability is not that complicated. Bernoulli's formula, examples of which are presented above, is direct proof of this.

Poisson's formula

Poisson's formula is used to calculate the probabilities of rare random events.

Basic formula:

P n (m)=λ m /m! × e (-λ) .

In this case λ = n x p. Here is a simple Poisson formula (probability theory). We will consider examples of problem solving below.

Task 3: The factory produced 100,000 parts. The probability that a part is defective is 0.0001. What is the probability that there will be exactly 5 defective parts in the batch?

As you can see, a defect is an unlikely event, so the Poisson formula (probability theory) is used for the calculation. Examples of solving problems of this kind are no different from the other tasks in the discipline: we substitute the necessary data into the formula:

A = “a randomly selected part will be defective.”

p = 0.0001 (according to the task conditions).

n = 100000 (number of parts).

m = 5 (defective parts). We substitute the data into the formula and get:

P 100000 (5) = 10 5 /5! × e -10 ≈ 0.0378.

Just like the Bernoulli formula (probability theory), whose solution examples are written above, the Poisson equation involves the number e. It can be found by the formula:

e -λ = lim n ->∞ (1-λ/n) n .

However, there are special tables that contain the values of e -λ for almost all λ of practical interest.
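A minimal sketch of the Poisson formula applied to the factory example (the function name is an arbitrary choice):

```python
import math

# Poisson formula: P_n(m) = lambda**m / m! * exp(-lambda), lambda = n * p.
# Factory example: n = 100000 parts, p = 0.0001, so lambda = 10.
def poisson(m, lam):
    return lam**m / math.factorial(m) * math.exp(-lam)

lam = 100000 * 0.0001   # = 10
print(round(poisson(5, lam), 4))  # 0.0378
```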

De Moivre-Laplace theorem

If in the Bernoulli scheme the number of trials is sufficiently large, and the probability of occurrence of event A in all schemes is the same, then the probability of occurrence of event A a certain number of times in a series of tests can be found by Laplace’s formula:

Р n (m) = 1/√(npq) × ϕ(x m),

x m = (m - np)/√(npq).

To better remember Laplace’s formula (probability theory), example problems are given below.

Task 4: An event occurs with probability p = 1/3 in each of n = 800 independent trials. Find the probability that it occurs exactly m = 267 times.

First, let's find x m: substituting the data into the formula gives 0.025. Using the tables, we find ϕ(0.025) = 0.3988. Now we can substitute all the data into the formula:

P 800 (267) = 1/√(800 x 1/3 x 2/3) x 0.3988 = 3/40 x 0.3988 = 0.03.

Thus, the probability that the event occurs exactly 267 times is approximately 0.03.
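The local de Moivre-Laplace approximation can be sketched as below, reusing n = 800, p = 1/3, m = 267; the helper name is an assumption made for this sketch:

```python
import math

# Local de Moivre-Laplace approximation:
# P_n(m) ~ phi(x) / sqrt(npq), where x = (m - n*p) / sqrt(npq)
# and phi is the standard normal density.
def local_laplace(n, m, p):
    q = 1 - p
    s = math.sqrt(n * p * q)
    x = (m - n * p) / s
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    return phi / s

print(round(local_laplace(800, 267, 1/3), 3))  # 0.03
```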

Bayes formula

The Bayes formula (probability theory), examples of solving problems with the help of which will be given below, is an equation that describes the probability of an event based on the circumstances that could be associated with it. The basic formula is as follows:

P (A|B) = P (B|A) x P (A) / P (B).

A and B are definite events.

P(A|B) is the conditional probability of event A, that is, the probability that A occurs given that event B has occurred.

P(B|A) is the conditional probability of event B given A.

So, the final part of the short course “Probability Theory” is the Bayes formula, examples of solutions to problems with which are below.

Task 5: Phones from three companies were brought to the warehouse. At the same time, the share of phones that are manufactured at the first plant is 25%, at the second - 60%, at the third - 15%. It is also known that the average percentage of defective products at the first factory is 2%, at the second - 4%, and at the third - 1%. You need to find the probability that a randomly selected phone will be defective.

A = “a randomly selected phone is defective.”

B 1 = “the phone was manufactured at the first factory.” Accordingly, events B 2 and B 3 correspond to the second and third factories.

As a result we get:

P (B 1) = 25%/100% = 0.25; P(B 2) = 0.6; P (B 3) = 0.15 - thus we found the probability of each option.

Now you need to find the conditional probabilities of the desired event, that is, the probability of defective products in companies:

P (A/B 1) = 2%/100% = 0.02;

P(A/B 2) = 0.04;

P (A/B 3) = 0.01.

Now let’s substitute the data into the total probability formula (the denominator of the Bayes formula) and get:

P (A) = 0.25 x 0.02 + 0.6 x 0.04 + 0.15 x 0.01 = 0.0305.
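The computation can be sketched in a few lines; the second step (the probability that a defective phone came from the first factory) goes beyond the original task and is included only to illustrate the Bayes formula itself:

```python
# Total probability and Bayes for the phone example.
# Priors: factory shares; conditionals: defect rates per factory.
priors = [0.25, 0.60, 0.15]
defect_rates = [0.02, 0.04, 0.01]

# Total probability: P(A) = sum over i of P(B_i) * P(A|B_i)
p_defective = sum(pb * pa for pb, pa in zip(priors, defect_rates))
print(round(p_defective, 4))  # 0.0305

# Bayes: P(B_1|A) = P(B_1) * P(A|B_1) / P(A)
p_first_given_defect = priors[0] * defect_rates[0] / p_defective
print(round(p_first_given_defect, 3))  # 0.164
```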

The article presents probability theory, formulas and examples of problem solving, but this is only the tip of the iceberg of a vast discipline. And after everything that has been written, it will be logical to ask the question of whether the theory of probability is needed in life. It’s difficult for an ordinary person to answer; it’s better to ask someone who has used it to win the jackpot more than once.
