Approximate Expected Value of Continuous Random Variable Matlab
Are my results significant?
William Menke, Joshua Menke, in Environmental Data Analysis with MatLab, 2012
11.3 Four important probability density functions
A great deal (but not all) of hypothesis testing can be performed using just four probability density functions. Each corresponds to a different function of the error, e, which is presumed to be uncorrelated and Normally distributed with zero mean and unit variance:
(11.10)  $Z = e, \qquad \chi_N^2 = \sum_{i=1}^{N} e_i^2, \qquad t_N = \frac{e}{\sqrt{\tfrac{1}{N}\sum_{i=1}^{N} e_i^2}}, \qquad F_{N,M} = \frac{\tfrac{1}{N}\sum_{i=1}^{N} e_i^2}{\tfrac{1}{M}\sum_{i=1}^{M} e_i'^2}$
Probability density function 1 is just the Normal probability density function with zero mean and unit variance. Note that any Normally distributed variable, d, with mean, $\bar{d}$, and variance, $\sigma_d^2$, can be transformed into one with zero mean and unit variance with the transformation $e = (d - \bar{d})/\sigma_d$.
Probability density function 2 is the chi-squared probability density function, which we discussed in detail in the previous section.
Probability density function 3 is new and is called Student's t-probability density function. It is the ratio of a Normally distributed variable and the square root of the sum of squares of N Normally distributed variables (the e in the numerator is assumed to be different from the es in the denominator). It can be shown to have the functional form
(11.11)  $p(t) = \frac{\Gamma\!\left(\frac{N+1}{2}\right)}{\sqrt{N\pi}\,\Gamma\!\left(\frac{N}{2}\right)}\left(1 + \frac{t^2}{N}\right)^{-\frac{N+1}{2}}$
The t-probability density function (Figure 11.3) has a mean of zero and, for N > 2, a variance of N/(N − 2) (its variance is undefined for N ≤ 2). Superficially, it looks like a Normal probability density function, except that it is longer-tailed (i.e., it falls off with distance from its mean much more slowly than does a Normal probability density function). In MatLab, the t-probability density function is computed as
pt = tpdf(t,N);
(MatLab eda11_03)
where t is a vector of t values.
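As a quick check on the heavier tails, the following sketch (an illustrative addition, not part of the original example; N = 5 is an arbitrary choice) plots the t-probability density function against the standard Normal curve using the same tpdf call:

% Compare Student's t-pdf with the standard Normal pdf (Statistics Toolbox).
N = 5;                            % degrees of freedom (illustrative choice)
t = (-6:0.01:6)';                 % vector of t values
pt = tpdf(t,N);                   % Student's t probability density
pn = exp(-0.5*t.^2)/sqrt(2*pi);   % standard Normal density, written out explicitly
plot(t,pt,'-',t,pn,'--');
legend('t, N = 5','Normal');
% At |t| = 4 the t-density is tens of times larger than the Normal density,
% which is the long-tailed behavior described above.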
Probability density function 4 is also new and is called the Fisher-Snedecor F-probability density function. It is the ratio of the sum of squares of two different sets of random variables. Its functional form cannot be written in terms of elementary functions and is omitted here. Its mean and variance are
(11.12)  $E(F) = \frac{M}{M-2} \;\; (M > 2), \qquad \operatorname{var}(F) = \frac{2M^2(N+M-2)}{N(M-2)^2(M-4)} \;\; (M > 4)$
Note that the mean of the F-probability density function approaches unity as M → ∞. For small values of M and N, the F-probability density function is skewed towards low values of F. At large values of M and N, it is more symmetric around F = 1. In MatLab, the F-probability density function is computed as
pF = fpdf(F,N,M);
(MatLab eda11_04)
where F is a vector of F values.
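To see this behavior of the mean numerically, the sketch below (illustrative only; N = 5 and the grid of F values are arbitrary choices) integrates F times the F-probability density function for increasing M:

% Numerical check that the mean of the F-pdf approaches 1 as M grows.
N = 5;                            % numerator degrees of freedom (illustrative)
F = (0.001:0.001:500)';           % grid of F values for numerical integration
for M = [5 20 100 1000]
    pF = fpdf(F,N,M);             % F probability density
    meanF = trapz(F,F.*pF);       % numerical approximation of the mean
    fprintf('M = %4d   mean of F is approximately %.3f\n',M,meanF);
end
% The printed means tend toward 1, consistent with a mean of M/(M-2) for M > 2.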
URL: https://www.sciencedirect.com/science/article/pii/B9780123918864000118
Data Analysis Concepts and Observational Methods
Lynne D. Talley, ... James H. Swift, in Descriptive Physical Oceanography (Sixth Edition), 2011
6.3.2 Probability Density Function
The probability density function (pdf) is the most basic building block for statistical description of a variable. Although the reader is much more likely to encounter spectral analysis or empirical orthogonal functions (see the following sections) in various publications, it is best to introduce pdfs first in order to develop intuition about estimates and confidence intervals.
The pdf of the true field is a measure of how likely the variable is to have a certain value. The probability of falling somewhere in the entire range of possible values is 1. The observed pdf is basically a histogram, that is, counts of the number of occurrences of a value in a given range. The histogram is then "normalized" to produce the pdf by dividing by the total number of observations and the bin widths. The more observations there are, the closer the histogram comes to the pdf of the true field, assuming that observational bias error is low (accuracy of observations is high).
Probability distribution functions can have many different shapes, depending on the variable and on the physical processes. As an example, from Gille (2005), a time series of wind velocity from an ocean buoy off the coast of southern California is shown in Figure 6.1. The data are hourly samples for four years. To compute the pdfs, the number of samples of velocities/speeds in each 0.1 m/sec bin was counted to create a histogram; for the pdf, the values in each bin were normalized by dividing by the total number of hourly samples (43,797) and by the bin width (0.1). The two wind velocity pdfs are somewhat symmetric about 0, but they are not quite Gaussian (bell-shaped, Eq. 6.5). The wind speed pdf cannot be centered at 0 since wind speeds can only be positive; this pdf resembles a Rayleigh distribution, which has positive values only, a steep rise to a maximum and then a more gradual fall toward higher values.
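The normalization described above can be sketched in a few lines of MatLab-style code (a hypothetical illustration with synthetic data standing in for the buoy winds; the Gaussian generator and the ±20 m/sec range are assumptions, not the Gille (2005) data):

% Turn a histogram into an empirical pdf by dividing by the number of
% samples and the bin width.
u = 3*randn(43797,1);                     % synthetic "wind velocity" samples (m/sec)
binWidth = 0.1;                           % bin width in m/sec, as in the text
edges = -20:binWidth:20;                  % bin edges covering the data range
counts = histcounts(u,edges);             % histogram: occurrences per bin
pdfEst = counts/(numel(u)*binWidth);      % normalize counts to an empirical pdf
binCenters = edges(1:end-1) + binWidth/2;
bar(binCenters,pdfEst,1);                 % empirical pdf; its area is close to 1
fprintf('area under empirical pdf = %.3f\n',sum(pdfEst)*binWidth);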
A pdf with a uniform distribution would have equal likelihoods of any value within a given range. The pdf would look like a "block." Random numbers generated by a random number generator, for instance, could have a uniform distribution (the same number of occurrences for each value).
One special form of pdf has a "bell shape" around the mean value of the variable. Such a pdf is called a Gaussian distribution or a normal distribution. Expressed mathematically, a pdf of the variable x with a normal distribution is
(6.5)  $f(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\frac{(x - \bar{x})^2}{2\sigma^2}\right)$
where the mean is defined in Eq. (6.1) and the standard deviation σ in Eq. (6.2). A field that is the sum of random numbers has a normal distribution. The pdf associated with calculating the mean value has a normal distribution. Thus if we measure a large number of sample means of the same variable, the distribution of these mean values would be normal.
The pdf associated with a sum of squared random variables is called a chi-squared distribution. Squared variables show up in basic statistics in the variance (6.2), so the chi-squared distribution is important for estimates of variance. Gaussian and chi-squared distributions have a special place in statistical analysis, especially in assessing the quality of an estimate (confidence intervals), as described at the end of the next section.
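A short simulation (an illustrative sketch, not from the chapter; the choice of N = 5 squared variables and 100,000 replicates is arbitrary) shows why sums of squares lead to the chi-squared distribution:

% The sum of N squared standard-normal variables follows a chi-squared
% distribution with N degrees of freedom (mean N, variance 2N).
N = 5;                         % number of squared variables (illustrative)
e = randn(100000,N);           % standard-normal samples
chi2 = sum(e.^2,2);            % one chi-squared realization per row
fprintf('sample mean %.2f (expect %d), sample variance %.2f (expect %d)\n', ...
        mean(chi2),N,var(chi2),2*N);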
URL: https://www.sciencedirect.com/science/article/pii/B978075064552210006X
Construction of Spatiotemporal Probability Laws
George Christakos, in Spatiotemporal Random Fields (Second Edition), 2017
Example 2.1
The M-PDF model
(2.5)  $f_X(\chi_1, \chi_2, \chi_3) = f_X(\chi_1)\,f_X(\chi_2)\,f_X(\chi_3)$
determines a fully independent trivariate PDF (T-PDF) model, whereas the model defined by the pair of conditions
(2.6a–b)
is a partially independent PDF model.
URL: https://www.sciencedirect.com/science/article/pii/B9780128030127000118
Lifetime Data Analysis
Dr Eduardo Calixto, in Gas and Oil Reliability Engineering (Second Edition), 2016
1.2.5 Loglogistic Probability Density Function
The loglogistic PDF, like the lognormal PDF, shows that most failures occur at the beginning of the life cycle, and they happen for the same reasons discussed before. The loglogistic PDF has two parameters: average (μ) and deviation (σ), which are called the position and scale parameters, respectively. Again, whenever σ decreases, the PDF is pushed toward the mean, becoming narrower and taller; in contrast, whenever σ increases, the PDF spreads out away from the mean, becoming broader and shallower. The loglogistic PDF is also skewed to the right, and because of this, equipment will often fail at the beginning of the life cycle, as in the lognormal PDF case. Mathematically, the loglogistic PDF and its reliability function are expressed in terms of the position parameter μ, the scale parameter σ, and the life cycle time t.
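As a hedged sketch (the exact form used in the book is an assumption here), one standard loglogistic parameterization consistent with these position and scale parameters is

$f(t) = \frac{e^{z}}{\sigma\,t\,(1 + e^{z})^{2}}, \qquad z = \frac{\ln t - \mu}{\sigma}, \qquad R(t) = \frac{1}{1 + e^{z}}$

with R(t) falling from 1 toward 0 as the life cycle time t increases.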
For example, Fig. 1.23 shows the loglogistic PDF that also represents corrosion in a furnace. Note that there is little difference between the lognormal PDF (gray) and the loglogistic PDF (black).
Also note the loglogistic failure rate and its behavior over time. Both the loglogistic and the lognormal failure rates increase over time and then, after a specific period, decrease, as shown in Fig. 1.24. Comparing the loglogistic failure rate (black line) with the lognormal failure rate (gray line) in Fig. 1.24, it is possible to see that the loglogistic failure rate decreases faster, even when the same historical failure data are used. Thus it is important to choose the PDF that best fits the historical failure data in order to make the best decisions. We will now discuss the Gumbel PDF, which, unlike the lognormal and loglogistic distributions, is skewed to the left.
URL: https://www.sciencedirect.com/science/article/pii/B9780128054277000014
Common Cause In Causal Inference
Peter Spirtes, in Philosophy of Statistics, 2011
3.1 Conditioning
The probability density function of Y conditional on W = m (denoted f(Y | W = m)) represents the density of Y in the subpopulation where W = m, and is defined from the joint distribution as f(Y | W = m) = f(Y, W)/f(W = m) (where f(Y, W) is the joint density of Y and W, and f(W = m) ≠ 0). The conditional density depends only upon the joint density (assuming the values of the variables conditioned on do not have density 0), and does not depend upon the causal relationships. When a variable W is conditioned on, this intuitively represents seeing the value of W.
For simplicity, instead of using the joint density to illustrate various concepts about conditioning and manipulating, the means of variables will be used. The mean of Y conditional on W = m will be denoted E(Y | W = m). In SEM C(θ), it can easily be shown that E_{C(θ)}(Y | W = m) = m · cov_{C(θ)}(Y, W) = m · b_{YZ} · b_{ZW} (where b_{YZ} and b_{ZW} are the structural coefficients in C(θ)).
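A small simulation sketch can illustrate this relation (the coefficient values, the noise standard deviations, and the chain W → Z → Y below are hypothetical choices, not Spirtes' example; W is given unit variance so that cov(Y, W) reduces to b_YZ · b_ZW):

% In a linear chain W -> Z -> Y with unit-variance W, the conditional mean
% E(Y | W = m) equals m * b_YZ * b_ZW.
bZW = 0.8;  bYZ = 0.5;  m = 2;            % illustrative structural coefficients
W = randn(200000,1);                      % W with zero mean and unit variance
Z = bZW*W + 0.3*randn(size(W));           % Z caused by W plus independent noise
Y = bYZ*Z + 0.3*randn(size(W));           % Y caused by Z plus independent noise
sel = abs(W - m) < 0.05;                  % condition on W being close to m
fprintf('simulated E(Y|W=m) = %.3f, predicted m*b_YZ*b_ZW = %.3f\n', ...
        mean(Y(sel)), m*bYZ*bZW);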
Conditional densities and conditional probabilities are useful for problems (e.g. diagnosis) in which the value of the variable of interest is expensive or difficult to measure, but other related variables that have information about the variable of interest are more easily observed. For example, the fact that the probability of measles conditional on spots is much higher than the probability of measles conditional on no spots is useful information in diagnosing whether someone has measles, since directly measuring the presence of measles is much more difficult and expensive than observing that they have spots.
URL: https://www.sciencedirect.com/science/article/pii/B9780444518620500253
Some Comments on Probability Theory
William Menke, in Geophysical Data Analysis (Fourth Edition), 2018
2.4 Gaussian Probability Density Functions
The probability density function for a particular random variable can be arbitrarily complicated, but in many instances, data possess the rather simple Gaussian (or Normal) probability density function
(2.21)  $p(d) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(d - \langle d\rangle)^2}{2\sigma^2}\right)$
This probability density function has mean 〈d〉 and variance σ 2 (Fig. 2.12). The Gaussian probability density function is so common because it is the limiting probability density function for the sum of random variables. The central limit theorem shows (with certain limitations) that regardless of the probability density function of a set of independent random variables, the probability density function of their sum tends to a Gaussian distribution as the number of summed variables increases. As long as the noise in the data comes from several sources of comparable size, it will tend to follow a Gaussian probability density function. This behavior is exemplified by the sum of the two uniform probability density functions in Section 2.3. The probability density function of their sum is more nearly Gaussian than the individual probability density functions (it being triangular instead of rectangular).
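The tendency toward Gaussian behavior can be demonstrated with a short simulation (an illustrative sketch; the numbers of summed uniform variables are arbitrary choices):

% Sums of uniform random variables become increasingly Gaussian as more
% terms are added (the kurtosis of a Gaussian is 3).
nsamples = 100000;
for nsum = [1 2 10]
    s = sum(rand(nsamples,nsum),2);        % sum of nsum uniform [0,1] variables
    z = (s - mean(s))/std(s);              % standardize for comparison
    fprintf('nsum = %2d   kurtosis of standardized sum = %.2f\n', ...
            nsum, mean(z.^4));
end
% The kurtosis moves from 1.8 (a single uniform variable) toward 3 as nsum grows.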
The joint probability density function for two independent Gaussian variables is just the product of two univariate probability density functions. When the data are correlated (say, with mean 〈d〉 and covariance [cov d]), the joint probability density function is more complicated, since it must express the degree of correlation. The appropriate generalization can be shown to be
(2.22)  $p(d) = \frac{1}{(2\pi)^{N/2}\,\bigl|[\operatorname{cov} d]\bigr|^{1/2}}\exp\!\left(-\tfrac{1}{2}\,[d - \langle d\rangle]^{T}[\operatorname{cov} d]^{-1}[d - \langle d\rangle]\right)$
Note that this probability density function reduces to Eq. (2.21) in the special case of N = 1 (where [cov d] becomes $\sigma_d^2$). It is perhaps not apparent that the general case has an area of unity, a mean of 〈d〉, and a covariance matrix of [cov d]. However, these properties can be derived by inserting Eq. (2.22) into the relevant integral and by transforming to the new variable $y = [\operatorname{cov} d]^{-1/2}[d - \langle d\rangle]$ (whence the integral becomes substantially simplified).
When p(d) (Eq. 2.22) is transformed using the linear rule m = Md, the resulting p(m) is also Gaussian in form with mean 〈m〉 = M〈d〉 and covariance matrix [cov m] = M[cov d]M T. Thus, all linear functions of Gaussian random variables are themselves Gaussian.
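This rule is easy to verify by simulation (a hypothetical sketch; the two-element mean vector, covariance matrix, and transformation M below are arbitrary illustrative values):

% Verify that m = M*d has mean M*<d> and covariance M*[cov d]*M'.
meand = [1; 2];                               % <d>
covd  = [2.0 0.5; 0.5 1.0];                   % [cov d]
M     = [1 1; 1 -2];                          % an arbitrary linear transformation
nreal = 100000;
d = repmat(meand,1,nreal) + chol(covd,'lower')*randn(2,nreal);  % Gaussian samples
m = M*d;
disp('sample mean of m and M*<d>:');  disp([mean(m,2), M*meand]);
disp('sample covariance of m, then M*[cov d]*M^T:');
disp(cov(m'));  disp(M*covd*M');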
In Chapter 5, we will show that the information contained in each of two probability density functions can be combined by multiplying the two distributions. Interestingly, the product of two Gaussian probability density functions is itself Gaussian (Fig. 2.13). Given Gaussian p_A(d) with mean 〈d_A〉 and covariance [cov d]_A and Gaussian p_B(d) with mean 〈d_B〉 and covariance [cov d]_B, the product p_C(d) = p_A(d) p_B(d) is Gaussian with mean and covariance (e.g., Menke and Menke, 2011, their Section 5.4)
(2.23)  $[\operatorname{cov} d]_C = \left([\operatorname{cov} d]_A^{-1} + [\operatorname{cov} d]_B^{-1}\right)^{-1}, \qquad \langle d_C\rangle = [\operatorname{cov} d]_C\left([\operatorname{cov} d]_A^{-1}\langle d_A\rangle + [\operatorname{cov} d]_B^{-1}\langle d_B\rangle\right)$
The idea that the model and data are related by an explicit relationship g(m) = d can be reinterpreted in light of this probabilistic description of the data. We can no longer assert that this relationship can hold for the data themselves, since they are random variables. Instead, we assert that this relationship holds for the mean data: g(m) = 〈d〉. The distribution for the data can then be written as
(2.24)  $p(d) = \frac{1}{(2\pi)^{N/2}\,\bigl|[\operatorname{cov} d]\bigr|^{1/2}}\exp\!\left(-\tfrac{1}{2}\,[d - g(m)]^{T}[\operatorname{cov} d]^{-1}[d - g(m)]\right)$
The model parameters now have the interpretation of a set of unknown quantities that define the shape of the distribution for the data. One approach to inverse theory (which will be pursued in Chapter 5) is to use the data to determine the distribution and thus the values of the model parameters.
For the Gaussian distribution (Eq. 2.24) to be sensible, g(m) must not be a function of any random variables. This is why we differentiated between data and auxiliary variables in Chapter 1; the latter must be known exactly. If the auxiliary variables are themselves uncertain, then they must be treated as data and the inverse problem becomes an implicit one with a much more complicated distribution than the above problem exhibits.
As an example of constructing the distribution for a set of data, consider an experiment in which the temperature d_i in some small volume of space is measured N times. If the temperature is assumed not to be a function of time and space, the experiment can be viewed as the measurement of N realizations of the same random variable or as the measurement of one realization of N distinct random variables that all have the same distribution. We adopt the second viewpoint.
If the data are independent Gaussian random variables with mean 〈d〉 and variance $\sigma_d^2$ so that $[\operatorname{cov} d] = \sigma_d^2 I$, then we can represent the assumption that all the data have the same mean by an equation of the form Gm = d:
(2.25)  $\begin{bmatrix}1\\ 1\\ \vdots\\ 1\end{bmatrix} m_1 = \begin{bmatrix}d_1\\ d_2\\ \vdots\\ d_N\end{bmatrix}$
where $m_1$ is a single model parameter. We can then compute explicit formulas for the expressions in p(d) as
(2.26)  $\bigl|[\operatorname{cov} d]\bigr| = \sigma_d^{2N}, \qquad [d - Gm]^{T}[\operatorname{cov} d]^{-1}[d - Gm] = \frac{1}{\sigma_d^{2}}\sum_{i=1}^{N}(d_i - m_1)^2$
The joint distribution is therefore
(2.27)  $p(d) = \frac{\sigma_d^{-N}}{(2\pi)^{N/2}}\exp\!\left(-\frac{1}{2\sigma_d^{2}}\sum_{i=1}^{N}(d_i - m_1)^2\right)$
URL: https://www.sciencedirect.com/science/article/pii/B9780128135556000022
Probability and Statistics
Guangren Shi, in Data Mining and Knowledge Discovery for Geoscientists, 2014
2.1.2.3 Density Function and Distribution Function of Probability
1. Probability density function p(x):
(2.3)  $p(x) \ge 0, \qquad \int_{-\infty}^{+\infty} p(x)\,dx = 1$
For example, a uniformly distributed probability density function on an interval [a, b] is p(x) = 1/(b − a) for a ≤ x ≤ b and p(x) = 0 otherwise.
Figure 2.1 shows that the probability density function expresses the density at each value of the random variable, and that the area enclosed by the probability density function and the random variable axis (x-coordinate axis) is 1, which coincides with the definition in Equation (2.3).
2. Probability distribution function F(x):
(2.4)  $F(x) = P(X \le x) = \int_{-\infty}^{x} p(t)\,dt$
(2.5)  $p(x) = \frac{dF(x)}{dx}$
For example, the corresponding uniformly distributed probability distribution function is F(x) = 0 for x < a, F(x) = (x − a)/(b − a) for a ≤ x ≤ b, and F(x) = 1 for x > b.
It is easy to see that the relationship between the uniform distributed probability density function and the probability distribution function meets the conditions of Equations (2.4) and (2.5).
Figure 2.2 shows that the probability distribution function expresses the cumulative value of the probability along the random variable axis (x-coordinate axis), which is the definition of Equation (2.4).
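A brief numerical sketch (illustrative only; the interval limits a = 2 and b = 5 are arbitrary choices) confirms these relationships for the uniform distribution:

% Uniform pdf on [a,b]: the area under p(x) is 1, and F(x) accumulates it.
a = 2;  b = 5;
x = linspace(0,7,7001);
p = (x >= a & x <= b)/(b - a);      % uniform pdf: 1/(b-a) inside [a,b], 0 outside
F = cumtrapz(x,p);                  % numerical probability distribution function
fprintf('total area under p(x): %.3f   F at x = 7: %.3f\n',trapz(x,p),F(end));
plot(x,p,'-',x,F,'--'); legend('p(x)','F(x)');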
URL: https://www.sciencedirect.com/science/article/pii/B9780124104372000023
Statistical Models and SCR
J. Andrew Royle, ... Beth Gardner, in Spatial Capture-recapture, 2014
2.1.2 Properties of probability distributions
A pdf or a pmf is a function like any other function in the sense that it has one or more arguments whose values determine the result of the function. However, probability functions have a few properties that distinguish them from other functions. The first is that the function must be non-negative for all possible values of the random variable, i.e., f(x) ≥ 0. The second requirement is that the integral of a pdf must be unity, ∫ f(x) dx = 1, and similarly for a pmf, the summation over all possible values is unity, Σ_x f(x) = 1. The following R code demonstrates this for the normal and binomial distributions:
> integrate(dnorm, -Inf, Inf, mean=0, sd=1)$value
[1] 1
> sum(dbinom(0:5, size=5, p=0.1))
[1] 1
This requirement is important to remember when one develops a non-standard probability distribution. For example, in Chapters 11 and 13, we work with a resource selection function whose probability density function is not one that is pre-defined in software packages such as R or BUGS.
Another feature of probability distributions is that they can be used to compute important summaries of random variables. The two most important summaries are the expected value, E(x), and the variance, Var(x). The expected value, or mean, can be thought of as the average of a very large sample from the specified distribution. For example, one way of approximating the expected value of a binomial distribution with N = 20 trials and p = 0.35 can be implemented in R using:
> mean(rbinom(10000, 20, 0.35))
[1] 6.9865
For most probability distributions used in this book, the expected values are known exactly, as shown in Table 2.1, and thus we don't need to resort to simulation methods. For instance, the expected value of the binomial distribution is exactly Np = 20 × 0.35 = 7. In this case, it happens to take an integer value, but this is not a necessary condition, even for discrete random variables.
Distribution | Notation | pdf or pmf | Support | Mean E(x) | Variance Var(x)
---|---|---|---|---|---
Discrete random variables | | | | |
Poisson | x ~ Pois(λ) | λ^x e^(−λ) / x! | x = 0, 1, 2, … | λ | λ
Bernoulli | x ~ Bern(p) | p^x (1 − p)^(1−x) | x = 0, 1 | p | p(1 − p)
Binomial | x ~ Bin(N, p) | (N choose x) p^x (1 − p)^(N−x) | x = 0, 1, …, N | Np | Np(1 − p)
Multinomial | x ~ Multin(N, p_1, …, p_k) | N!/(x_1! ⋯ x_k!) p_1^(x_1) ⋯ p_k^(x_k) | x_i = 0, 1, …, N with Σ x_i = N | E(x_i) = N p_i | Var(x_i) = N p_i(1 − p_i)
Continuous random variables | | | | |
Normal | x ~ Norm(μ, σ²) | (2πσ²)^(−1/2) exp(−(x − μ)²/(2σ²)) | −∞ < x < ∞ | μ | σ²
Uniform | x ~ Unif(a, b) | 1/(b − a) | a ≤ x ≤ b | (a + b)/2 | (b − a)²/12
Beta | x ~ Beta(α, β) | Γ(α+β)/(Γ(α)Γ(β)) x^(α−1) (1 − x)^(β−1) | 0 < x < 1 | α/(α+β) | αβ/((α+β)²(α+β+1))
Gamma | x ~ Gamma(α, β) | β^α/Γ(α) x^(α−1) e^(−βx) | x > 0 | α/β | α/β²
Multivariate Normal | x ~ MVN(μ, Σ) | (2π)^(−k/2) det(Σ)^(−1/2) exp(−(x − μ)ᵀ Σ⁻¹ (x − μ)/2) | x ∈ ℝ^k | μ | Σ
A more formal definition of an expected value is the average of all possible values of the random variable, weighted by their probabilities. For continuous random variables, this weighted average is found by integration:
(2.1.3)  $E(x) = \int_{-\infty}^{\infty} x\,f(x)\,dx$
For example, if x is normally distributed with mean 3 and unit variance (variance equal to 1), we could find the expected value using the following code.
> integrate(function(x) x*dnorm(x, 3, 1), -Inf, Inf)
3 with absolute error < 0.00033
Of course, the mean is the expected value of the normal distribution, so we didn't need to compute the integral; but the point is that Eq. (2.1.3) is generic. For discrete random variables, the expected value is found by summation rather than integration:
(2.1.4)  $E(x) = \sum_{x} x\,f(x)$
where the summation is over all possible values of x. Earlier we approximated the expected value of the binomial distribution with N = 20 trials and p = 0.35 by taking a Monte Carlo average. Equation (2.1.4) lets us find the exact answer, using this bit of R code:
> sum(dbinom(0:100, 20, 0.35)*0:100)
[1] 7
This is great. But of what use is it? One very important concept to understand is that when we fit models, we are often modeling changes in the expected value of some random variable. For example, in Poisson regression, we model the expected value of the random variable, which may be a function of environmental variables.
The ability to model the expected value of a random variable gets us very far, but we also need a model for the variance of the random variable. The variance describes the amount of variation around the expected value; specifically, Var(x) = E((x − E(x))²). Clearly, if the variance is zero, the variable is not random, as there is no uncertainty in its outcome. For some distributions, notably the normal distribution, the variance is a parameter to be estimated. Thus, in ordinary linear regression, we estimate both the expected value E(x), which may be a function of covariates, and the variance σ², or equivalently the residual standard error σ. For other distributions, the variance is not an explicit parameter to be estimated; instead, the relationship between the mean and the variance is fixed. In the case of the Poisson distribution, the mean is equal to the variance, E(x) = Var(x) = λ. A similar situation is true for the binomial distribution—the variance is determined by the two parameters N and p. In our earlier example with N = 20 and p = 0.35, the variance is Np(1 − p) = 4.55. Toying around with these ideas using random number generators may be helpful. Here is some code to illustrate some of these basic concepts:
> 20*0.35*(1-0.35)                 # Exact variance, Var(x)
[1] 4.55
> x <- rbinom(100000, 20, 0.35)
> mean((x-mean(x))^2)              # Monte Carlo approximation
[1] 4.545525
URL: https://www.sciencedirect.com/science/article/pii/B9780124059399000025
Reliability, Availability, and Maintainability
Ian Sutton, in Process Risk and Reliability Management (Second Edition), 2015
Failure Rates
The probability density function, f(t), is defined such that f(t) dt is the probability of failure in the small time interval dt. The cumulative distribution function, F(t), is the integral of f(t).
(16.3)  $F(t) = \int_{0}^{t} f(\tau)\,d\tau$
where F(t) can be interpreted in either of the following two ways:
1. For a population of a particular item, it is the fraction of all units of the original population that have failed by time t.
2. It is the probability that a particular item will have failed by time t.
F(t) will generally have a shape such as that shown in Figure 16.6. It asymptotically approaches a value of 1.0.
The instantaneous failure rate of an item, i.e., the likelihood that it will fail at time t given that it has survived to that time, is called the hazard rate, h(t). It is defined in Eq. (16.4).
(16.4)  $h(t) = \frac{f(t)}{1 - F(t)}$
where h(t) is not a probability and can have a value that is >1.0.
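A numerical sketch (hypothetical; the lognormal failure-time density and its parameters are illustrative choices, not data from the chapter) shows how h(t) is obtained from f(t) and F(t):

% Compute the hazard rate h(t) = f(t)/(1 - F(t)) for an illustrative
% lognormal failure-time density, using only base MatLab functions.
mu = 1.0;  sigma = 0.5;                       % illustrative lognormal parameters
t = linspace(0.01,10,1000);
f = exp(-(log(t) - mu).^2/(2*sigma^2))./(t*sigma*sqrt(2*pi));   % pdf f(t)
F = cumtrapz(t,f);                            % cumulative distribution F(t)
h = f./(1 - F);                               % hazard rate; rises and then falls here
plot(t,h); xlabel('t'); ylabel('h(t)');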
The following terms are used when calculating failure rates. (Other authorities use different definitions, particularly for MTBF (mean time between failures) and MTTF (mean time to failure), so care should be taken when using data from different sources.)
• Mean Time to Failure (MTTF): The mean of an equipment item's operating times, i.e., the time from when an item is put into operation to the time when it fails.
• Mean Time to Repair (MTTR): The mean time it takes to repair an equipment item. It is formally defined as the "total corrective maintenance time divided by the number of corresponding maintenance actions during a given period of time."
• Mean Downtime (MDT): MDT and MTTR are often treated as being the same. However, some analysts distinguish between MTTR, which is just the repair time itself, and MDT, the total time needed to bring an item back into service, including the time for shutdown activities such as waiting for technicians to be available, transporting items to and from the work site, and the ordering of spare parts.
• Mean Time between Failures (MTBF): MTBF is the mean of the time between failures for any particular item. It includes both operating and repair time. It relates to the other terms as shown in Eq. (16.5) and Figure 16.7.
(16.5)  MTBF = MTTF + MTTR
One potential source of confusion is the meaning of the term "failure rate." When the failure rate does not vary with time, it is simply 1/MTBF. If the failure rate is not constant with respect to time, then either f(t) or h(t) can be defined as being the "failure rate." In this chapter, h(t) is taken to be the failure rate.
Constant/Exponential Distribution
The most widely used type of failure rate is the constant—or exponential—distribution. The use of the word exponential for constant failure rate seems contradictory. However, the constant failure rate refers to h(t), not f(t). In other words, as time progresses the number of items that were operable at time t(0) will decline. However, those items that do survive to time t have the same rate of failure as they had at time t(0).
For the constant failure rate, f(t), F(t), and h(t) are defined as follows:
(16.6)  $f(t) = \lambda e^{-\lambda t}$
(16.7)  $F(t) = 1 - e^{-\lambda t}$
(16.8)  $R(t) = 1 - F(t) = e^{-\lambda t}$
(16.9)  $h(t) = \frac{f(t)}{1 - F(t)} = \lambda$
where λ (lambda) is the overall failure rate.
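A short check (an illustrative sketch; λ = 0.2 is an arbitrary value) confirms that the hazard rate stays at λ even though f(t) itself decays with time:

% For the exponential distribution, the hazard rate is constant at lambda.
lambda = 0.2;                     % illustrative overall failure rate
t = linspace(0,25,500);
f = lambda*exp(-lambda*t);        % probability density function f(t)
F = 1 - exp(-lambda*t);           % cumulative distribution function F(t)
h = f./(1 - F);                   % hazard rate; equals lambda for every t
fprintf('maximum |h - lambda| over the grid: %.2e\n',max(abs(h - lambda)));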
Lognormal Distribution
The lognormal distribution is illustrated in Figure 16.8. It shows that the bulk of the failures occur early on, but that some items survive for a long time. This type of curve can be useful when considering the impact of maintenance on overall failure rates. The probability of an item being repaired in a very short time is low because it takes a finite amount of time to assemble workers and spare parts. (Figure 16.8 shows that h(t) is zero for the lowest values of t, indicating that MDT cannot be zero, i.e., it always takes a finite time for the item to be repaired.) At the other end of the curve, the repair time may be stretched out if a critical spare part is not available. Most of the failures occur near the peak of the curve.
Bathtub Curve
An item's failure rate is generally not a single value—it will vary with time and the age of the item. The bathtub curve, shown in Figure 16.9, illustrates this phenomenon (the term "bathtub" comes from the rather fanciful resemblance of the shape of the overall failure rate to that of a bathtub).
In practice, few equipment items exhibit the behavior shown in Figure 16.9. For example, pressure vessels do not wear out—they generally fail due to the impact of an external event, such as the addition of corrosive materials. Nevertheless, the sketch is useful for understanding and categorizing the different ways in which equipment items can fail.
Early failures
Some components fail soon after they are placed in service. This phenomenon—sometimes referred to as "infant mortality"—usually results from problems in the manufacture or commissioning of the item. Not all equipment types exhibit wear-in behavior. For example, pressure vessels that have been fabricated and inspected to the appropriate standards are not likely to suffer from early failures.
Constant failure rate
The constant failure rate curve shown in Figure 16.9 is a straight line corresponding to the exponential failure rate already discussed. It represents random events that occur independently of time, for example, an operator error that can take place at any time.
Wear-out failures
As equipment ages, particularly equipment containing moving parts, internal components will gradually wear out, thus leading to an increased failure rate. In practice, many items are prevented from reaching the wear-out point through the use of preventive maintenance and risk-based inspection programs. In other situations, the supplier of the equipment item may simply specify its "shelf life," after which the item will be replaced before it fails.
Reliability Block Diagrams
Reliability block diagrams (RBDs) are similar to block flow diagrams (BFDs) (described in Chapter 16). A BFD shows the flow of gases, liquids, and solids through the process. An RBD illustrates the "flow" of reliability from the front of the plant to the back.
Figure 16.10 shows the simple block flow diagram that was introduced in the first standard example. The process consists of four processing areas in series (100–400), each of which receives utilities from Area 500.
The corresponding RBD is shown in Figure 16.11. In order for the system to function, it is essential that all five areas operate—if any one of them fails, then the overall system fails. The key difference between Figures 16.10 and 16.11 is that Unit 500, the utility system, is shown as being in the chain of events; if the utilities fail then the overall system fails.
A similar analysis can be carried out using the same example for the tank, pump, and vessel system (Example 2 in Chapter 1). The BFD for the system is shown in Figure 16.12.
The RBD for this system is identical to the BFD. There are two paths to reliable operation:
1. T-100/P-101A/V-101
2. T-100/P-101B/V-101.
A key difference between BFDs and RBDs is that the first can incorporate recycle streams; the second cannot. This difference is illustrated in Figures 16.13 and 16.14, which show the heat recovery system for a distillation process.
Cold feed is heated first by the overheads stream from the column in Exchanger E-1, then by the bottoms stream in Exchanger E-2. The overhead stream from C-1 is further cooled and condensed in either E-3A or E-3B (either has sufficient capacity for the full service).
The schematic shown in Figure 16.13 is quite complex since it involves two recycle streams. However, the RBD for this system is much simpler, as shown in Figure 16.14, because there is no recycle function in the reliability "flow." If any one of the items in the system fails (with the exception of either E-3A or E-3B), then the system fails.
Active/Standby Redundancy
Backup equipment can be either active or on standby. If it is active, then it is operating alongside the primary item and will continue to fulfill the system function should the primary fail. However, it is more common to have a situation where the spare item is on standby. Following the failure of the primary equipment item, the backup has to be brought on line sufficiently quickly to prevent interruption to the system's function. This means that there has to be a mechanism for making the switchover—either automated or manual. In practice, the switching process can itself be a source of unreliability—particularly if it is not tested all that frequently.
Using Figure 16.14 as an example, if E-3A fails (say one of its tubes suddenly develops a large leak), then the following sequence of events must take place in order for there to be a successful (manual) switch to E-3B:
• The system instrumentation must signal that a problem has occurred.
• The operator must understand the instrument signals and correctly diagnose the cause of the problem.
• He or she needs to switch the exchangers, making sure that valves are closed and opened in the correct sequence.
• All of the above must take place in a timely manner.
The modified RBD for this new system is shown in Figure 16.15. If either E-3A or E-3B is operating, then the system is operating. However, if one of the exchangers fails, then the switchover mechanism must work properly, and the alternate exchanger must work.
Quantification of Block Diagrams
The quantification of block diagrams follows the same general principles as used for quantifying fault trees and event trees (see Chapter 15). To calculate the probability of success for each path in a block diagram, the probabilities for each item in that path are multiplied. In the case of Figure 16.15, the probability of success for the two paths is as follows:
(16.10)  p(1) = p(E-1) × p(E-2) × p(C-1) × p(E-3A)
and
(16.11)  p(2) = p(E-1) × p(E-2) × p(C-1) × p(E-3B)
To calculate the probability of success for the system, the path probabilities are combined: they are added, and the product of the two is subtracted so that success along both paths is not counted twice. Therefore, in this example, the probability that the system will successfully operate is:
(16.12)  p(System) = p(1) + p(2) − p(1) × p(2)
The following values are used for illustrative purposes:
p(E1) | 0.95 |
p(E2) | 0.96 |
p(C1) | 0.99 |
p(E-3A) | 0.90 |
p(E-3B) | 0.80 |
When these values are inserted into Eqs. (16.10)–(16.12), the probability of success for each train and for the overall system becomes:
p(1) | 0.813 |
p(2) | 0.722 |
p(System) | 0.948 |
It was noted in Chapter 15 that the second-order term for fault tree analysis is often not significant when probability values are low. With reliability work, where values typically are high, the second-order term must be used. If it is excluded then the probability of success is 1.535—an obvious anomaly.
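The arithmetic above can be reproduced in a few lines (a sketch using the illustrative probabilities from the table; the variable names are ad hoc):

% Quantify the RBD of Figure 16.15 with the illustrative probabilities above.
pE1 = 0.95;  pE2 = 0.96;  pC1 = 0.99;  pE3A = 0.90;  pE3B = 0.80;
p1 = pE1*pE2*pC1*pE3A;            % path 1: E-1, E-2, C-1, E-3A
p2 = pE1*pE2*pC1*pE3B;            % path 2: E-1, E-2, C-1, E-3B
pSystem = p1 + p2 - p1*p2;        % add the paths, subtract the second-order term
fprintf('p(1) = %.3f, p(2) = %.3f, p(System) = %.3f\n',p1,p2,pSystem);
% Prints p(1) = 0.813, p(2) = 0.722, p(System) = 0.948, matching the table above.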
URL: https://www.sciencedirect.com/science/article/pii/B9780128016534000163
STOCHASTIC VARIABLES AND PROCESSES
Dongxiao Zhang, in Stochastic Methods for Flow in Porous Media, 2002
2.1.4 Statistical Moments
Although the probability density function p_U(u) provides complete information about the random variable U, it is usually of great interest to obtain some statistical moments of U. For example, the first moment (i.e., the mean or expectation) of U can be derived with the aid of p_U(u),
(2.25)  $\langle U\rangle = E(U) = \int_{-\infty}^{\infty} u\,p_U(u)\,du$
where both 〈〉 and E( ) indicate expectation. In this book, these two notations are used interchangeably.
Similarly, the expectation of any given function V = g(U) of the random variable U is given by
(2.26)  $E(V) = \int_{-\infty}^{\infty} v\,p_V(v)\,dv$
It appears that we must first find its PDF p_V(v) in order to obtain the expectation E(V). Finding p_V(v) is not always straightforward for a nonmonotonic function g(U). Fortunately, there is a basic theorem that avoids this procedure. That theorem reads as
(2.27)  $E(V) = E[g(U)] = \int_{-\infty}^{\infty} g(u)\,p_U(u)\,du$
which states mathematically that E(V) for any given function V = g(U) can be expressed directly in terms of the PDF p_U(u) of U and the function g(u). The proof of theorem (2.27) is not given here; the reader is referred to Papoulis [1991].
With Eq. (2.27), one may define other moments of U by letting g(U) = U^m, where m is a nonnegative integer. For example, the zeroth moment for m = 0 is the integration of the PDF over the entire probability space, giving the value of one; the second moment for m = 2 is the mean square of U. One may also define the central moments of U by letting g(U) = (U − 〈U〉)^m, which are more commonly used for m ≥ 2. The first central moment of U is the mean of the fluctuation U′ ≡ U − 〈U〉, which is always zero. The second central moment is the variance of U,
(2.28)  $\sigma_U^2 = E\!\left[(U - \langle U\rangle)^2\right] = \int_{-\infty}^{\infty} (u - \langle U\rangle)^2\,p_U(u)\,du$
The square root of the variance is the standard deviation (σ_U) of U, which is a measure of the magnitude of the fluctuations of U about its mean 〈U〉. The third central moment is the skewness s_U = E(U′³), which measures the lack of symmetry of the distribution of U. Each higher moment gives some additional information about the structure of p_U(u). As a matter of fact, a complete knowledge of all moments may be needed to reconstruct p_U(u).
One may want to prove that for the normal random variable with the PDF (2.11), the mean is μ, the variance is σ2, and the skewness is zero. For a normal distribution, the first two central moments provide complete information because the higher moments can be expressed in terms of them,
(2.29)  $E(U'^m) = 0 \;\;(m \text{ odd}), \qquad E(U'^m) = 1\cdot 3\cdot 5\cdots(m-1)\,\sigma^m \;\;(m \text{ even})$
For example, the fourth central moment, called the kurtosis, is E(U′⁴) = 3σ⁴.
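Theorem (2.27) can be sketched numerically (an illustrative example; the mean of 2 and standard deviation of 1.5 are arbitrary choices) by integrating g(u) against the normal PDF:

% Compute moments of a normal random variable directly from its PDF p_U(u),
% without first deriving the PDF of g(U).
mu = 2;  sig = 1.5;                                    % illustrative parameters
pU = @(u) exp(-(u - mu).^2/(2*sig^2))/(sig*sqrt(2*pi));
Eg = @(g) integral(@(u) g(u).*pU(u),-Inf,Inf);         % E[g(U)] from Eq. (2.27)
fprintf('mean     %.4f (expect %.4f)\n',Eg(@(u) u),mu);
fprintf('variance %.4f (expect %.4f)\n',Eg(@(u) (u - mu).^2),sig^2);
fprintf('skewness %.4f (expect 0)\n',Eg(@(u) (u - mu).^3));
fprintf('kurtosis %.4f (expect %.4f)\n',Eg(@(u) (u - mu).^4),3*sig^4);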
URL: https://www.sciencedirect.com/science/article/pii/B9780127796215500036
Source: https://www.sciencedirect.com/topics/earth-and-planetary-sciences/probability-density-function