Conditional Multivariate Gaussian Identity I'm trying to verify the form of a multivariate Gaussian provided in a paper I'm reading. It should be pretty elementary. Let $Y=X+\varepsilon$ where $X\sim N(0,C)$ and $\varepsilon\sim N(0,\sigma^2\mathbf{I})$. The authors then claim that $$ X|Y,C,\sigma^2 \sim N(\mu,\Sigma),...
This is a correct representation of the conditional variance. Since $$\begin{pmatrix} X\\ \epsilon \end{pmatrix}\sim N\Big(\begin{pmatrix} 0\\ 0 \end{pmatrix},\begin{pmatrix} C & \mathbf O\\ \mathbf O & \sigma^2\mathbf I \end{pmatrix}\Big)$$ and $$\begin{pmatrix} X\\ Y \end{pmatrix} = \begin{pmatrix} \mathbf 1^\text{T...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/481518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Discrete probability distribution involving curtailed Riemann zeta values $\renewcommand{\Re}{\operatorname{Re}}$ $\renewcommand{\Var}{\operatorname{Var}}$We define the discrete random variable $X$ as having the probability mass function $$f_{X}(k) = \Pr(X=k) = \zeta(k)-1, $$ for $k \geq 2 $. Here, $\zeta(\cdot)$ is th...
To illustrate whuber's comment $$\begin{array}{c|ccccccccc} & Y = 2 & Y = 3 & Y = 4 & Y = 5 & \dots\\ \hline X = 2 & \frac{1}{2^2} & \frac{1}{3^2} & \frac{1}{4^2} & \frac{1}{5^2} & \dots\\ X = 3 & \frac{1}{2^3} & \frac{1}{3^3} & \frac{1}{4^3} & \frac{1}{5^3} & \dots\\ X = 4 & \frac{1}{2^4} & \frac{1}{3^4} & \frac{1}{4^4} & \frac{1}...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/604631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Expected value of a random variable Random variable $X$ has the probability density function \begin{equation*} f\left( x\right) =\left\{ \begin{array}{ccc} n\left( \frac{x}{\theta }\right) ^{n-1} & , & 0<x\leqslant \theta \\ n\left( \frac{1-x}{1-\theta }\right) ^{n-1} & , & \theta \leqslant x<1 \end{array} \right...
$$\begin{align*} {\rm E}[X^k] &= \int_{x=0}^\theta x^k n \biggl(\frac{x}{\theta}\biggr)^{\!n-1} \, dx + \int_{x=\theta}^1 x^k n \biggl(\frac{1-x}{1-\theta}\biggr)^{\!n-1} \, dx \\ &= \frac{n}{\theta^{n-1}} \int_{x=0}^\theta x^{n+k-1} \, dx + \frac{n}{(1-\theta)^{n-1}} \int_{x=0}^{1-\theta} (1-x)^k x^{n-1} \, dx \\ &= \...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/93972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
K-means++ algorithm I am trying to implement k-means++, but I'm not sure how it works. I have the following dataset: (7,1), (3,4), (1,5), (5,8), (1,3), (7,8), (8,2), (5,9), (8,0) From Wikipedia: Step 1: Choose one center uniformly at random from among the data points. Let's say the first centroid is (8,0). Step 2:...
For step 3, Choose one new data point at random as a new center, using a weighted probability distribution where a point x is chosen with probability proportional to $D(x)^2$. Compute all the $D(x)^2$ values and convert them to an array of cumulative sums. That way each item is represented by a range proportiona...
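To make the cumulative-sum trick concrete, here is a minimal R sketch of that weighted draw (my own illustration, assuming the question's data and its first centroid (8,0); pts, centers, D2 are names I made up):

pts <- matrix(c(7,1, 3,4, 1,5, 5,8, 1,3, 7,8, 8,2, 5,9, 8,0), ncol = 2, byrow = TRUE)
centers <- pts[9, , drop = FALSE]                     # first centroid: (8,0)
# squared distance from each point to its nearest chosen center
D2 <- apply(pts, 1, function(p) min(colSums((t(centers) - p)^2)))
cums <- cumsum(D2)                                    # cumulative sums
r <- runif(1, 0, cums[length(cums)])                  # uniform draw over the total mass
next_center <- pts[which(r <= cums)[1], ]             # point whose range contains r

Repeating the last three lines after appending next_center to centers yields the remaining seeds.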
{ "language": "en", "url": "https://stats.stackexchange.com/questions/135403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Equation for the variance inflation factors Following a question asked earlier, the variance inflation factors (VIFs) can be expressed as $$ \textrm{VIF}_j = \frac{\textrm{Var}(\hat{b}_j)}{\sigma^2} = [\mathbf{w}_j^{\prime} \mathbf{w}_j - \mathbf{w}_j^{\prime} \mathbf{W}_{-j} (\mathbf{W}_{-j}^{\prime} \mathbf{W}_{-j})^...
Assume all $X$ variables are standardized by the correlation transformation you mentioned, i.e., the unit-length-scaled version of $\mathbf{X}$. Standardizing the model does not change the correlations between the $X$ variables, and the $VIF$s can be calculated once this standardized transformation of the original linear model is made. Let's ...
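As a numerical complement (my own sketch, not part of the original answer): after unit-length scaling, $\mathrm{VIF}_j$ is simply the $j$-th diagonal element of the inverse correlation matrix, which is easy to check in R:

set.seed(1)
X <- matrix(rnorm(300), ncol = 3)
X[, 3] <- X[, 1] + 0.5 * X[, 2] + rnorm(100, sd = 0.5)   # induce collinearity
diag(solve(cor(X)))   # VIF_j = 1/(1 - R_j^2) for each column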
{ "language": "en", "url": "https://stats.stackexchange.com/questions/244468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Probability of a person correctly guessing at least one number out of the two number another person chooses Person A randomly chooses a number from 1 to 5 (inclusive) twice, so A ends up with 2 numbers chosen (can be the same number). Person B also makes a random choice from that list (only 1 number). What's the proba...
Your reasoning is correct up to the very last line: $$\mathbb{P}[\text{case 1}] \cdot\frac{1}{5} + \mathbb{P}[\text{case 2}]\cdot\frac{2}{5} = \frac{1}{5} \cdot \frac{1}{5} + \frac{4}{5} \cdot\frac{2}{5}$$ But this is not equal to $\frac{3}{5}$. Instead: $$\frac{1}{5} \cdot \frac{1}{5} + \frac{4}{5} \cdot\frac{2}{5} = ...
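A quick simulation confirms the value $\frac{9}{25}$ (a sketch of my own, not from the answer):

set.seed(1)
a1 <- sample(1:5, 1e6, replace = TRUE)   # A's first draw
a2 <- sample(1:5, 1e6, replace = TRUE)   # A's second draw
b  <- sample(1:5, 1e6, replace = TRUE)   # B's single draw
mean(b == a1 | b == a2)                  # close to 9/25 = 0.36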
{ "language": "en", "url": "https://stats.stackexchange.com/questions/586515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to get the $INR(x_i)$ in PCA, the relative contribution of $x_i$ to the total inertia? Given the following data array : $$\begin{array}{|c|c|c|c|c|c|c|c|c|} \hline J/I&1 & 2 & 3 & 4 & 5 & 6\\ \hline x & 1 & 0 & 0 & 2 & 1 & 2\\ y & 0 & 0 & 1 & 2 & 0 & 3\\ z & 0 & 1 & 2 & 1 & 0 & 2\\ \hline \end{array}$$ I can get th...
If I'm reading your question correctly, you're looking for the inertia around an arbitrary point in your data cloud. This can be formulated as $I_g-||\bar x-a||^2$, where $I_g$ is the total inertia and $a$ is the particular point in question. Here's a source on that, with the derivation included on pages 8-9: https://cedric.cnam.f...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/277183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the UMVUE of $\frac{\mu^2}{\sigma}$ where $X_i\sim\mathsf N(\mu,\sigma^2)$ Suppose $X_1, ..., X_4$ are i.i.d $\mathsf N(\mu, \sigma^2)$ random variables. Give the UMVUE of $\frac{\mu^2}{\sigma}$ expressed in terms of $\bar{X}$, $S$, integers, and $\pi$. Here is a relevant question. I first note that if $X_1,....
I have skipped some details in the following calculations and would ask you to verify them. As usual, we have the statistics $$\overline X=\frac{1}{4}\sum_{i=1}^4 X_i\qquad,\qquad S^2=\frac{1}{3}\sum_{i=1}^4(X_i-\overline X)^2$$ Assuming both $\mu$ and $\sigma$ are unknown, we know that $(\overline X,S^2)$ is a complet...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/373936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Density of sum of truncated normal and normal distribution Suppose that $\varepsilon\sim N(0, \sigma_\varepsilon)$ and $\delta\sim N^+(0, \sigma_\delta)$. What is the density function for $X = \varepsilon - \delta$? This proof apparently appeared in a Query by M.A. Weinstein in Technometrics 6 in 1964, which stated tha...
Ultimately I needed to work through the algebra a bit more to arrive at the specified form. For posterity, the full proof is given below. Proof First consider the distribution function of $X$, which is given by $$F(x) = \Pr(X \leq x) = \Pr(\varepsilon - \delta \leq x)$$ $$= \int_{\varepsilon - \delta \leq x} f_\var...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/419722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
An inequality for a bi-modal hypergeometric distribution Say $X$ has a hypergeometric distribution with parameters $m$, $n$ and $k$, with $k\leq n<\frac12m$. I know that $X$ has a dual mode if and only if $d=\frac{(k+1)(n+1)}{m+2}$ is integer. In that case $P(X=d)=P(X=d-1)$ equals the maximum probability. I am wonderin...
In the case you are considering you have $P(X=d)=P(X=d-1)$ so let's consider the sign of $$\frac{P(X=d+1)}{P(X=d)}-\frac{P(X=d-2)}{P(X=d-1)} = \tfrac{(k-d)(n-d)}{(d+1) (m-k-n+d+1)} -\tfrac{ (d-1) (m-k-n+d-1)}{(k-d+2)(n-d+2)} \\= \tfrac{(k-d)(n-d)(k-d+2)(n-d+2)-(d-1) (m-k-n+d-1)(d+1) (m-k-n+d+1)}{(d+1) (m-k-n+d+1)(k-d...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/458289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Hellinger distance for two shifted log-normal distributions If I am not mistaken, Hellinger distance between P and Q is generally given by: $$ H^2(P, Q) = \frac12 \int \left( \sqrt{dP} - \sqrt{dQ} \right)^2 .$$ If P and Q, however, are two differently shifted log-normal distributions of the following form $$ {\frac {1}...
Note that \begin{align} H^2(P, Q) &= \frac12 \int (\sqrt{dP} - \sqrt{dQ})^2 \\&= \frac12 \int dP + \frac12 \int dQ - \int \sqrt{dP} \sqrt{dQ} \\&= 1 - \int \sqrt{dP} \sqrt{dQ} ,\end{align} and that the density function is 0 if $x \le \gamma$. Thus your question asks to compute \begin{align} 1 - H^2(P, Q) &= \int_{\...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/361280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Range of integration for joint and conditional densities Did I mess up the range of integration in my solution to the following problem ? Consider an experiment for which, conditioned on $\theta,$ the density of $X$ is \begin{align*} f_{\theta}(x) = \frac{2x}{\theta^2},\,\,0 < x< \theta. \end{align*} Suppose the pr...
The univariate case seems correct to me. The multivariate case should be as follows: $$\begin{align*} g(x)=\int_{x_{[n]}}^1f_{\theta}(x)\pi(\theta)d\theta &= \int_{x_{[n]}}^1\prod_{i = 1}^n\left(\frac{2x_i}{\theta^2}\right)d\theta\\ &=\left(\prod_{i = 1}^n2x_i\right)\int_{x_{[n]}}^1\theta^{-2n}d\theta\\ &=\...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/431601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
MLE Derivation for AR Model So I am trying to derive the MLE for an AR(1) model. Here are my thoughts thus far: The AR process is: $z_t = \delta + \psi_1z_{t-1} + \epsilon_t$ The expected value of $z_t = \frac{\delta}{1 - \psi_1}$. The variance of $z_t = \frac{1}{1 - \psi_1^2}$. So this is where I am getting caught up....
I'm not directly answering your question, but a quick note on the construction of the likelihood function in your model. The likelihood of a $T$-sized sample of $\mathbf{e} = \left[ \epsilon_{1}, \, \epsilon_{2}, \ldots, \, \epsilon_{T} \right]^{\mathsf{T}}$ of i.i.d. normally distributed $\epsilon \sim N(0,\sigma^{2})$ ...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/511402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Posterior distribution of Normal Normal-inverse-Gamma Conjugacy Here is the setting: The likelihood of the data is \begin{align} p(\boldsymbol{x} | \mu, \sigma^2) &= \left(\frac{1}{2\pi \sigma^2}\right)^{\frac{n}{2}} \exp\left\{ -\frac{1}{2\sigma^2} \sum\limits_{i=1}^n (x_i - \mu)^2 \right\} \nonumber \\ &= \frac{1}{(2\pi...
This is much simpler to prove compared with your earlier question. \begin{align} \sum\limits_{i=1}^n (x_i - \overline{x})^2 &+ \frac{V_0^{-1} n}{(V_0^{-1} + n)}(m_0 - \overline{x})^2 = \sum\limits_{i=1}^n x_i^2 - n\overline{x}^2\\ &\qquad + \frac{V_0^{-1} n}{(V_0^{-1} + n)}(m_0^2 - 2 m_0\overline{x} + \overline{x}^2)\...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/512681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Derivation of F-distribution from inverse Chi-square? I am trying to derive the F-distribution from the Chi-square and inverse Chi-square distributions. Somewhere in the process I make a mistake and the result slightly differs from the canonical form of the Fisher-Snedecor F distribution. Please help me find it. In order to derive the p.d.f. of the F-distributio...
In $$f_{\frac{\chi_n^2}{\chi_m^2}}(x) = \int \limits_{t=0}^{\infty} f_{\chi^2_n}(t) f_{\frac{1}{\chi^2_m}}(\frac{x}{t})dt$$ the Jacobian term is missing. Indeed, if $Z\sim\chi^2_n$ and $Y\sim\chi^{-2}_m$, and if $X=ZY$, the joint density of $(Z,X)$ is $$f_{\chi^2_n}(z) f_{\chi^{-2}_m}(\frac{x}{z})\left|\frac{\text dy}{...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/531835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Cumulative incidence of X Suppose the joint survival function of the latent failure times for two competing risks, $X$ and $Y$, is $S(x,y)=(1-x)(1-y)(1+0.5xy)$, $0<x<1$, $0<y<1$. Find the cumulative incidence function of $X$? I first solved the marginal cumulative distribution function of $X$: $(1-x)$. Then I tried to ...
For $0 \leq x \leq 1$ the cumulative incidence of $X$ is defined as $$ \mathbb P \left (X \leq x, X \leq Y \right) $$ To compute this probability we need to integrate the joint density of $(X,Y)$, $$ f(x,y) = \frac{3}{2} -x-y+2xy $$ over the set $\mathcal{A} \equiv \{(u,v) \in [0,1]^2 \mid u \leq x \wedge u \leq v \} $...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/550004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Non-IID Uniform Distribution $A$ is uniform$(0, 2)$ and $B$ is uniform$(1, 3)$. Find Cov$(W, Z)$, where $W=\min(A,B)$ and $Z=\max(A,B).$ Since $WZ = AB,$ then by independence of $A$ and $B$, $E(WZ) = E(A)E(B),$ so that $$Cov(W,Z) = E(A) E(B) - E(W) E(Z) = (1)(2) - E(W) E(Z).$$ It suffices to find E(W) and E(Z) whi...
\begin{align*} F_{W}(w) = P(W\le w) = 1- [(1-P(A< w) )(1-P(B< w ))] = 1- \left[ \left( 1 - \frac{w}{2}\right) \left( 1 - \frac{w-1}{2}\right) \right] = 1 - \left[ \left( \frac{2-u}{2}\right) \left(\frac{3-u}{2}\right) \right]. \end{align*} Why does the notation $w$ change to $u$? And the PDF should be $$ 1 - F_W(w...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/603510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the P(A|C) if we know B depends on A and C depends on B? Given a Bayesian network that looks like the following: A->B->C How do we compute P(A|C)? My initial guess would be: P(A|C) = P(A|B) * P(B|C) + P(A|not B) * P(not B|C)
I would prefer $\Pr(A|C) = \Pr(A|C,B) \Pr(B|C) + \Pr(A|C, \text{not } B) \Pr(\text{not } B|C)$ and the following counterexample shows why there is a difference.

Prob  A  B  C
0.1   T  T  T
0.1   F  T  T
0.1   T  F  T
0.2   F  F  T
0.2   T  T  F
0.1   F  T  F
0.1   T  F  F
0.1   F  F  F

Then in your formulation $\Pr(A|C)=\frac{2}{5}$, $\Pr(A|B...
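The counterexample is easy to reproduce numerically; here is a minimal R sketch (tab and p are names of my own):

tab <- data.frame(
  prob = c(0.1, 0.1, 0.1, 0.2, 0.2, 0.1, 0.1, 0.1),
  A = c(T, F, T, F, T, F, T, F),
  B = c(T, T, F, F, T, T, F, F),
  C = c(T, T, T, T, F, F, F, F))
p <- function(cond) sum(tab$prob[cond])
p(tab$A & tab$C) / p(tab$C)   # direct Pr(A|C) = 2/5
# the question's decomposition, which drops the conditioning on C:
p(tab$A & tab$B)/p(tab$B) * p(tab$B & tab$C)/p(tab$C) +
  p(tab$A & !tab$B)/p(!tab$B) * p(!tab$B & tab$C)/p(tab$C)   # 0.48, different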
{ "language": "en", "url": "https://stats.stackexchange.com/questions/19024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Distribution of $XY$ if $X \sim$ Beta$(1,K-1)$ and $Y \sim$ chi-squared with $2K$ degrees Suppose that $X$ has the beta distribution Beta$(1,K-1)$ and $Y$ follows a chi-squared with $2K$ degrees. In addition, we assume that $X$ and $Y$ are independent. What is the distribution of the product $Z=XY$ . Update My attempt...
After some valuable remarks, I was able to find the solution: We have $f_X(x)=\frac{1}{B(1,K-1)} (1-x)^{K-2}$ and $f_Y(y)=\frac{1}{2^K \Gamma(K)} y^{K-1} e^{-y/2}$. Also, we have $0\le x\le 1$. Thus, if $x=\frac{z}{y}$, we get $0 \le \frac{z}{y} \le 1$ which implies that $z\le y < \infty$. Hence: \begin{align} f_Z ...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/183574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
The expected long run proportion of time the chain spends at $a$ , given that it starts at $c$ Consider the transition matrix: $\begin{bmatrix} \frac{1}{5} & \frac{4}{5} & 0 & 0 & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 \\ \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} \\ 0 & \frac{1}{3} & \frac...
What is the expected long run proportion of time the chain spends at $a$, given that it starts at $b$? This exercise, technically, asks for the limiting probability value $\ell_b(a)$. You can note that the limiting distribution $\ell_b= \left(\frac{5}{13}, \frac{8}{13}, 0, 0, 0\right)$ that you correctly evaluated is...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/262912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Full-Rank design matrix from overdetermined linear model I'm trying to create a full-rank design matrix X for a randomized block design model starting from something like the example from page 3/8 of this paper (Wayback Machine) . It's been suggested that I can go about this by eliminating one of each of the columns fo...
The matrix $\mathbf{X}^\text{T} \mathbf{X}$ is called the Gramian matrix of the design matrix $\mathbf{X}$. It is invertible if and only if the columns of the design matrix are linearly independent ---i.e., if and only if the design matrix has full rank (see e.g., here and here). (So yes, these two things are closely...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/314022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to show this matrix is positive semidefinite? Let $$K=\begin{pmatrix} K_{11} & K_{12}\\ K_{21} & K_{22} \end{pmatrix}$$ be a symmetric positive semidefinite real matrix (PSD) with $K_{12}=K_{21}^T$. Then, for $|r| \le 1$, $$K^*=\begin{pmatrix} K_{11} & rK_{12}\\ rK_{21} & K_{22} \end{pmatrix}$$ is also a PSD m...
There is already a great answer by @whuber, so I will try to give an alternative, shorter proof, using a couple of theorems:

* For any PSD $A$ and any $Q$, $Q^TAQ$ is PSD.
* For PSD $A$ and PSD $B$, $A + B$ is also PSD.
* For PSD $A$ and $q > 0$, $qA$ is also PSD.

And now: \begin{align*} K^* &= \begin{pmatr...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/322207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Variance of X+Y+XY? Assuming that random variables X and Y are independent, what is $\displaystyle Var((1+X)(1+Y)-1)=Var(X+Y+XY)$? Should I start as follows \begin{equation} Var((1+X)(1+Y)-1)\\ =Var((1+X)(1+Y))\\ =(E[(1+X)])^2 Var(1+Y)+(E[(1+Y])^2 Var(1+X)+Var(1+X)Var(1+Y) \end{equation} or maybe as follows \begin{equa...
For independent random variables $X$ and $Y$ with means $\mu_X$ and $\mu_Y$ respectively, and variances $\sigma_X^2$ and $\sigma_Y^2$ respectively, \begin{align}\require{cancel} \operatorname{var}(X+Y+XY) &= \operatorname{var}(X)+\operatorname{var}(Y)+\operatorname{var}(XY)\\ &\quad +2\cancelto{0}{\operatorname{cov}(X,...
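As a sanity check on where this is heading, the product form $(1+X)(1+Y)$ combined with the standard variance-of-a-product identity for independent factors gives $\sigma_X^2(1+\mu_Y)^2+\sigma_Y^2(1+\mu_X)^2+\sigma_X^2\sigma_Y^2$; a quick R sketch of my own, with arbitrary parameters:

set.seed(1)
mx <- 1; my <- -0.5; sx2 <- 4; sy2 <- 2.25
x <- rnorm(1e6, mx, sqrt(sx2)); y <- rnorm(1e6, my, sqrt(sy2))
var(x + y + x*y)                                # sample value
sx2*(1 + my)^2 + sy2*(1 + mx)^2 + sx2*sy2       # closed form, = 19 here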
{ "language": "en", "url": "https://stats.stackexchange.com/questions/323905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A Pairwise Coupon Collector Problem This is a modified version of the coupon collector problem, where we are interested in making comparisons between the "coupons". There are a number of constraints which have been placed in order to make this applicable to the application of interest (not relevant here, but related to...
A Solution for Question 1 \begin{align*} E(X) &= \binom{M}{2}\left(1 - (1-p)^T\right) \\ V(X) &= \binom{M}{2}(1-p)^T\left(1 -\binom{M}{2}(1-p)^T\right) + 6\binom{M}{3}(1-q)^T + 6\binom{M}{4}(1-r)^T \end{align*} where \begin{align*} p &= \frac{K(K-1)}{M(M-1)} \\ q &= \frac{2\binom{M-2}{K-2} - \binom{M-3}{K-3}}{\binom{M...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/541014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculation of an "unconstrained" normal distribution (starting from a censored one) Assume two r.v.s $W$ and $Y|W=w$ with (1) $W \sim \text{N}(\mu_w,\sigma_w^2)$ (iid) (2) $Y|W=w \sim \text{N}(w,\sigma_y^2)$ (iid) Further, we only observe $Y$ if $Y$ is less than $W$, i.e., (3) $Y|Y\le W$ Goal: Find the pdf of the ...
Ok. Let's do this, for CV's sake. First compact by setting $C=\frac{1}{\sqrt{2\pi\sigma^2_y}}\frac{1}{\sqrt{2\pi\sigma^2_w}} = \frac{1}{2\pi\sigma_y\sigma_w}$, so $$f_Y(y) =C \int_{-\infty}^{\infty}\exp\left\{-\frac{(y-w)^2}{2\sigma_y^2}\right\}\exp\left\{-\frac{(w-\mu_w)^2}{2\sigma_w^2}\right\}dw$$ We have $$\exp\le...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/73157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Show that the value is, indeed, the MLE Let $ X_1, ..., X_n$ be i.i.d. with pdf $$f(x;\theta)=\frac{x+1}{\theta(\theta+1)}\exp(-x/\theta), \quad x>0, \theta >0.$$ It is asked to find the MLE estimator for $\theta.$ The likelihood function is given by $$L(\theta;x)=[\theta(\theta+1)]^{-n}\exp\left(-\frac{\sum_i x_i}{\theta}\right)\p...
Removing the multiplicative constants that do not depend on $\theta$, the likelihood function in this case is: $$\begin{equation} \begin{aligned} L_\mathbf{x}(\theta) &= \prod_{i=1}^n \frac{1}{\theta (\theta+1)} \cdot \exp \Big( - \frac{x_i}{\theta} \Big) \\[6pt] &= \frac{1}{\theta^n (\theta+1)^n} \cdot \exp \Big( - \...
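For what it's worth, setting the score to zero leads to a quadratic in $\theta$; the following R sketch checks the resulting closed form against a numerical optimum (the algebra and all names here are my own, so verify before relying on it):

# score = 0  <=>  2n*theta^2 - (S - n)*theta - S = 0, with S = sum(x), so
# theta_hat = ((xbar - 1) + sqrt((xbar - 1)^2 + 8*xbar)) / 4  (positive root)
set.seed(1)
x <- rexp(200, rate = 0.5)   # stand-in positive data for illustration
negll <- function(th) length(x)*log(th) + length(x)*log(th + 1) + sum(x)/th
xbar <- mean(x)
((xbar - 1) + sqrt((xbar - 1)^2 + 8*xbar)) / 4   # closed form
optimize(negll, c(1e-6, 100))$minimum            # numerical MLE, matches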
{ "language": "en", "url": "https://stats.stackexchange.com/questions/115962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Probability of two random variables being equal The question is as follows: Let $X_1$ ~ Binomial(3,1/3) and $X_2$ ~ Binomial(4,1/2) be independent random variables. Compute P($X_1$ = $X_2$) I'm not sure what it means to compute the probability of two random variables being equal.
Let $Z=X_1-X_2$. Then $P(Z=z)=\sum_{x} P(X_1=x,X_2=x-z)$, where the sum runs over all $x$ with $0\le x\le 3$ and $0\le x-z\le 4$. Since $X_1$ and $X_2$ are independent, $P(Z=z)=\sum_{x} P(X_1=x)P(X_2=x-z)=\sum_{x} \binom{3}{x}\left(\frac{1}{3}\right)^x\left(\frac{2}{3}\right)^{3-x}\binom{4}{4-x+z}\left(\frac{1}{2}\right)^{x-z}\left(\frac{1}{2}\right)^{4-x+z}.$ When $X_1=X_2\Rightarrow z=0$. Th...
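The resulting number can be checked directly in R (my one-line sketch):

sum(dbinom(0:3, 3, 1/3) * dbinom(0:3, 4, 1/2))   # P(X1 = X2) = 2/9 ~ 0.2222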
{ "language": "en", "url": "https://stats.stackexchange.com/questions/182691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
A GENERAL inequality for a bi-modal hypergeometric distribution Say $X$ has a hypergeometric distribution with parameters $m$, $n$ and $k$, with $k\leq n<\frac12m$. I know that $X$ has a dual mode if and only if $d=\frac{(k+1)(n+1)}{m+2}$ is integer. In that case $P(X=d)=P(X=d-1)$ equals the maximum probability. See my...
You can turn the answer from the other question into an inductive proof for this question*. $$\tfrac{P(X=d+c+1)}{P(X=d+c)}-\tfrac{P(X=d-c-2)}{P(X=d-c-1)} = \tfrac{(k-d-c)(n-d-c)}{(d+1+c) (m-k-n+d+1+c)} -\tfrac{ (d-c-1) (m-k-n+d-c-1)}{(k-d+c+2)(n-d+c+2)} \\= \tfrac{(k-d-c)(n-d-c)(k-d+c+2)(n-d+c+2)-(d-c-1) (m-k-n+d-c-1)(...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/459012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Distribution of the pooled variance in paired samples Suppose a bivariate normal population with means $\mu_1$ and $\mu_2$, equal variance $\sigma^2$, and correlation $\rho$. Taking a paired sample, it is possible to compute the pooled variance. If $S^2_1$ and $S^2_2$ are the sample variances of the first...
I'm not sure about a reference for this result, but it is possible to derive it relatively easily, so I hope that suffices. One way to approach this problem is to look at it as a problem involving a quadratic form taken on a normal random vector. The pooled sample variance can be expressed as a quadratic form of this...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/482118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Mean (or lower bound) of Gaussian random variable conditional on sum, $E(X^2| k \geq|X+Y|)$ Suppose I have two mean zero, independent Gaussian random variables $X \sim \mathcal{N}(0,\sigma_1^2)$ and $Y \sim \mathcal{N}(0,\sigma_2^2)$. Can I say something about the conditional expectation $E(X^2| k \geq|X+Y|)$? I thin...
Let's simplify a little. Define $$(U,V) = \frac{1}{\sqrt{\sigma_X^2+\sigma_Y^2}}\left(X+Y,\ \frac{\sigma_Y}{\sigma_X}X - \frac{\sigma_X}{\sigma_Y}Y\right).$$ You can readily check that $U$ and $V$ are uncorrelated standard Normal variables (whence they are independent). In terms of them, $$X = \frac{\sigma_X}{\sqrt{\s...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/596285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Probability of getting imaginary roots Let $X$ and $Y$ be independent and identically distributed uniform random variable over $(0,1)$. Let $S=X+Y$. Find the probability that the quadratic equation $9x^2+9Sx+1=0$ has no real roots. What I attempted For no real roots we must have $$(9S)^2-4\cdot 1\cdot 9<0$$ So, we n...
Yes. And nowadays, it's easy to check for gross errors by simulation. Here is a MATLAB simulation:

>> n = 1e8; sum((9*sum(rand(n,2),2)).^2-36 < 0)/n
ans = 0.2223

In the real world, it's always good to check, or at least partially check, your work by different methods.
{ "language": "en", "url": "https://stats.stackexchange.com/questions/345815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Conditional probability doubt Assume $$P(B|A) = 1/5,\ P(A) = 3/4,\ P(A \cap B) = 3/20,\ \textrm{and}\ P(¬B|¬A) = 4/7.$$ Find $P(B)$. What I tried: $$P(B)=\dfrac{ P(A \cap B)}{P(B|A)}=(3/20)/(1/5) = 3/4.$$ Answer is $P(B)=9/35.$ Where have I made the mistake?
The probability of B can be split into the probability given A and given not A $$P(B) = P(B|A) \cdot P(A) + P(B|\neg A) \cdot P(\neg A)$$ The negations can be replaced by one minus the actual and vice versa $$P(B) = \frac{1}{5} \cdot \frac{3}{4} + (1-P(\neg B| \neg A)) \cdot (1-P(A))$$ $$P(B) = \frac{3}{20} + (1-\frac{...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/367300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Expectation of reciprocal of $(1-r^{2})$ If $r$ is the coefficient of correlation for a sample of $N$ independent observations from a bivariate normal population with population coefficient of correlation zero, then $E(1-r^2)^{-1}$ is (a) $\quad\frac{(N-3)}{(N-4)}$ I tried finding the expectation from the density funct...
From the problem statement, you are given that a sample of $N$ observations are made from a bivariate normal population with correlation coefficient equal to zero. Under these assumptions, the probability density function (PDF) for $r$ simplifies greatly to: \begin{eqnarray*} f_{R}(r) & = & \frac{\left(1-r^{2}\right)...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/384897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Iterated expectations and variances examples Suppose we generate a random variable $X$ in the following way. First we flip a fair coin. If the coin is heads, take $X$ to have a $Unif(0,1)$ distribution. If the coin is tails, take $X$ to have a $Unif(3,4)$ distribution. Find the mean and standard deviation of $X$. This ...
There are generally two ways to approach these types of problems: (1) finding the second-stage expectation $E(X)$ with the theorem of total expectation; or (2) finding $E(X)$ directly from the marginal density $f_{X}(x)$. These are equivalent methods, but you might find one easier to comprehend, so I present the...
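Either route can be cross-checked by simulating the two-stage scheme directly (a sketch of my own; the exact values are $E(X)=2$ and $\mathrm{Var}(X)=\frac{1}{12}+\frac{9}{4}$ by the iterated-expectation formulas):

set.seed(1)
heads <- runif(1e6) < 0.5
x <- ifelse(heads, runif(1e6, 0, 1), runif(1e6, 3, 4))
mean(x)   # ~ 2
sd(x)     # ~ sqrt(1/12 + 9/4) ~ 1.5275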
{ "language": "en", "url": "https://stats.stackexchange.com/questions/404102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Can I find $f_{x,y,z}(x,y,z)$ from $f_{x,y+z}(x,y+z)$? Suppose I know densities $f_{x,y+z}(x,y+z)$, $f_y(y)$, $f_z(z)$, and $y$ and $z$ are independent. Given this information, can I derive $f_{x,y,z}(x,y,z)$?
It is tempting to think so, but a simple counterexample with a discrete probability distribution shows why this is not generally possible. Let $(X,Y,Z)$ take on the eight possible values $(\pm1,\pm1,\pm1).$ Let $0\le p\le 1$ be a number and use it to define a probability distribution $\mathbb{P}_p$ as follows: * $\...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/520274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Integration of an equation $$\int_{x}^{y}\left[\sum_{i=1}^{N}\sqrt{a}\cos\left(\frac{2\pi(d_{i}-a)}{\lambda} \right)\right]^{\!2}da$$ Can anyone solve this integral for me? I don't know how the summation and the integration will behave with each other.
You have to square the sum first. Note that the $\sqrt{a}$ is common to all terms in $$\sqrt{a}\cos\left(\frac{2\pi(d_{i}-a)}{\lambda} \right)\cdot \sqrt{a}\cos\left(\frac{2\pi(d_{j}-a)}{\lambda} \right),$$ so it can factor out as $a.$ That is, we have $$\int_{x}^{y}a\left[\sum_{i=1}^{N}\cos\left(\frac{2\pi(d_{i}-a)}{\...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/527167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Covariance in the errors of random variables I have two computed variables say $x\sim N(\mu_{x}, \sigma_{x})$ and $y\sim N(\mu_y, \sigma_y)$. Additionally, the $\sigma_x$ and $\sigma_y$ are both computed from different types of errors (different components used to compute $\mu_x$ and $\mu_y$). $$\begin{align} \sigma_x ...
Generally speaking, the relation $\sqrt{Cov(A^2,B^2)}=Cov(A,B)$ does not hold. Consider the following counterexample: Let $X\sim U[0,2\pi]$ and $Y=\sin(X), Z=\cos(X)$. You can see here the proof for $Cov(Y,Z)=0$. Now, let's examine $Cov(Y^2,Z^2)$: $$Cov(Y^2,Z^2)=E[(Y^2-E[Y^2])(Z^2-E[Z^2])]=E\left[(\sin^2(X)-\int_0^{2\p...
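Both covariances are easy to confirm numerically (my sketch; the exact value of the second one works out to $-\tfrac{1}{8}$):

set.seed(1)
x <- runif(1e6, 0, 2*pi)
cov(sin(x), cos(x))       # ~ 0
cov(sin(x)^2, cos(x)^2)   # ~ -0.125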
{ "language": "en", "url": "https://stats.stackexchange.com/questions/550338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability of a sum of probabilistic random variables? Suppose we have $\mathbb P(A > x) \leq m$ and $\mathbb P(B > x) \leq m$. What is $\mathbb P(A + B > y)$? I have been looking for a related axiom and not had any luck.
Without any additional assumptions (such as independence), we can say the following: If $x > \frac{1}{2}y$, we can't bound $\mathbb P(A + B > y)$. In fact, for any $p$, we can find $A$ and $B$ such that $\mathbb P(A + B > y) = p$ regardless of the value of $m$. Otherwise, when $x \leq \frac{1}{2}y$, we can infer that $\ma...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/141534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why isn't variance defined as the difference between every value following each other? This may be a simple question for many but here it is: Why isn't variance defined as the difference between every value following each other instead of the difference to the average of the values? This would be the more logical choic...
Just a complement to the other answers, variance can be computed as the squared difference between terms: $$\begin{align} &\text{Var}(X) = \\ &\frac{1}{2\cdot n^2}\sum_i^n\sum_j^n \left(x_i-x_j\right)^2 = \\ &\frac{1}{2\cdot n^2}\sum_i^n\sum_j^n \left(x_i - \overline x -x_j + \overline x\right)^2 = \\ &\frac{1}{2\cdot ...
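A small R check of the identity (my sketch); note that it reproduces the population variance $\frac1n\sum_i(x_i-\bar x)^2$, not R's $n-1$ version var():

set.seed(1)
x <- rnorm(10); n <- length(x)
sum(outer(x, x, function(a, b) (a - b)^2)) / (2 * n^2)
mean((x - mean(x))^2)   # identical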
{ "language": "en", "url": "https://stats.stackexchange.com/questions/225734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 8, "answer_id": 6 }
PDF of $X^2+2aXY+bY^2$ It is my first post on this forum. I am not a mathematician (so excuse me if I don't use the right vocabulary). I have two independent Normal random variables $X$ and $Y$: \begin{aligned} X&\sim N(0,\sigma^{2})\\ Y&\sim N(0,s^{2}) \end{aligned} How can I find the PDF of: $$J=X^2+2aXY+bY^2$$ wher...
First of all, $J$ can be rewritten like this: $$J=\frac{b-a^2}{b} X^2+b\left(\frac{a}{b}X+Y \right)^2$$ This way, you can easily see that $J$ must be non-negative and that $J\ge \frac{b-a^2}{b} X^2$ which restricts what $X$ can be if you know $J$. Now, find the cumulative distribution function: $$P[J \le t]=P\left[\fra...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/507303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why the variance of Maximum Likelihood Estimator(MLE) will be less than Cramer-Rao Lower Bound(CRLB)? Consider this example. Suppose we have three events to happen with probability $p_1=p_2=\frac{1}{2}\sin ^2\theta ,p_3=\cos ^2\theta $ respectively. And we suppose the true value $\theta _0=\frac{\pi}{2}$. Now if we do ...
The first issue you have here is that your likelihood function does not appear to match the description of your sampling mechanism. You say that you only observe events 1 and 2 happen, but if the sample size $n$ is known then this still fixes the number of times that event 3 happens (since $m_1 + m_2 + m_3 = n$). Tak...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/592676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
A trivial question about Covariance I'm just learning about covariance and encountered something I don't quite understand. Assume we have two random variables X and Y, where the respective joint-probability function assigns equal weights to each event. According to Wikipedia, the Cov(X, Y) can then be calculated as fol...
The idea is that the possible outcomes in your sample are $i=1, \ldots, n$, and each outcome $i$ has equal probability $\frac{1}{n}$ (under the probability measure that assigns equal probability to all outcomes that you appear to be using). You have $n$ outcomes, not $n^2$. To somewhat indulge your idea, you could comp...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/266856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How can we apply the rule of stationary distribution to the continuous case of a Markov chain? If the Markov chain converges, then $$\pi = Q \pi,$$ where $\pi$ is the posterior distribution and $Q$ is the transition distribution (it's a matrix in the discrete case). I tried to apply that to the continuous case of a Markov ch...
That stationary distribution is correct. Using the law of total probability, you have: $$\begin{equation} \begin{aligned} p(X_{t+1} = x) &= \int \limits_\mathbb{R} p(X_{t+1} = x | X_t = r) \cdot p(X_t = r) \ dr \\[6pt] &= \int \limits_{-\infty}^\infty \text{N}(x | \phi r, 1) \cdot \text{N} \bigg( r \bigg| 0, \frac{1}...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/423484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finite Population Variance for a Changing Population How does the addition of one unit affect the population variance of a finite population if everything else remains unchanged? What are the conditions such that the new unit leaves the variance unchanged (increases/decreases it)? I was able to find the following paper...
I was unable to find the sample calculations that correspond to the specific problem here (as suggested by Glen_b), but I was able to confirm the following answer with numerical calculations in R at the bottom of this answer. Let $N$ be the initial number of units in the population and $N + 1$ be the number of units in...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/117111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Find the mean and standard error for the mean. Have I used the correct formulas in this situation? A university has 807 faculty members. For each faculty member, the number of refereed publications was recorded. This number is not directly available on the database, so requires the investigator to examine each record se...
The formulas for the descriptive and inferential statistics you used are correct in terms of SRS (simple random sampling) and variance estimation.
{ "language": "en", "url": "https://stats.stackexchange.com/questions/304894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Stationary Distributions of an irreducible Markov chain I was trying to get all stationary distributions of the following Markov chain. Intuitively, I would say there are two, resulting from splitting up the Markov chain into two irreducible ones. However, I feel this is not mathematically correct. How else wou...
Conditioned on $X_0\in\{0,1\}$ we have from solving \begin{align} \pi_0 &= \frac13\pi_0 + \frac12\pi_1\\ \pi_1 &= \frac23\pi_0 + \frac12\pi_1\\ \pi_0 + \pi_1 &= 1 \end{align} $\pi_0 = \frac37$, $\pi_1=\frac 47$. Conditioned on $X_0\in\{3,4\}$ we have by solving a similar system of equations $\pi_3 = \frac4{13}$, $\pi_4...
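The first system can also be solved mechanically (a sketch of my own, using the two-state transition probabilities implied by the equations above):

P <- matrix(c(1/3, 2/3,
              1/2, 1/2), nrow = 2, byrow = TRUE)
A <- rbind(t(P) - diag(2), rep(1, 2))   # stationarity equations plus normalization
qr.solve(A, c(0, 0, 1))                 # 3/7, 4/7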
{ "language": "en", "url": "https://stats.stackexchange.com/questions/438165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
MSE of the Jackknife Estimator for the Uniform distribution The Jackknife is a resampling method, a predecessor of the Bootstrap, which is useful for estimating the bias and variance of a statistic. This can also be used to apply a "bias correction" to an existing estimator. Given the estimand $\theta$ and an estimator...
It is well known that the order statistics, sampled from a uniform distribution, are Beta-distributed random variables (when properly scaled). $$\frac{X_{(j)}}{\theta} \sim \text{Beta}(j, n+1-j)$$ Using standard properties of the Beta distribution we can obtain the mean and variance of $X_{(n)}$ and $X_{(n-1)}$. Bias \...
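For comparison with the closed-form bias, the jackknife correction itself is only a few lines of generic R (my sketch, assuming the estimator is $\hat\theta=X_{(n)}$ with $\theta=1$ and $n=20$):

set.seed(1)
x <- runif(20); n <- length(x)
theta_hat <- max(x)
loo <- sapply(seq_len(n), function(i) max(x[-i]))   # leave-one-out estimates
theta_jack <- n * theta_hat - (n - 1) * mean(loo)   # bias-corrected estimate
c(theta_hat, theta_jack)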
{ "language": "en", "url": "https://stats.stackexchange.com/questions/458883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Pdf of the sum of two independent Uniform R.V., but not identical Question. Suppose $X \sim U([1,3])$ and $Y \sim U([1,2] \cup [4,5])$ are two independent random variables (but obviously not identically distributed). Find the pdf of $X + Y$. So far. I'm familiar with the theoretical mechanics to set up a solution. So...
Here is a plot, as suggested by the comments. What I was getting at is that it is a bit cumbersome to draw a picture for problems where we have disjoint intervals (see my comment above). It's not bad here, but suppose instead we had $X \sim U([1,5])$ and $Y \sim U([1,2] \cup [4,5] \cup [7,8] \cup [10, 11])$. Using @whuber's idea: We notic...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/489224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Generating random variables from a given distribution using inversion sampling Given this density function $f(x)$: $$ f\left(x\right)=\left\{\begin{matrix}x+1, & -1\le x\le0\\1-x, & 0<x\le1\end{matrix}\right. $$ Generate random variables using the inverse sampling method in R. Here is my attempt: f <- function...
The cumulative distribution function, $F(x)$, is given by $$ F(x) = \int_{-\infty}^x f(t)dt $$ So for, $- 1 \leq x \leq 0$, \begin{align*} F(x) &= \int_{-\infty}^x f(t)dt \\ &= \frac{x^2}{2} +x + \frac{1}{2} \end{align*} and for $0 \leq x \leq 1$, \begin{align*} F(x) &= \int_{-\infty}^0 f(t)dt + \int_0^x f(t)dt \\ & =...
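Carrying the inversion through (a sketch of my own algebra: solve $x^2/2+x+1/2=u$ on the lower piece and the analogous equation on the upper piece) gives code that completes the question's attempt:

inv_cdf <- function(u) ifelse(u < 0.5, -1 + sqrt(2*u), 1 - sqrt(2*(1 - u)))
set.seed(1)
x <- inv_cdf(runif(1e5))
hist(x, prob = TRUE, breaks = 60)
curve(ifelse(x < 0, x + 1, 1 - x), from = -1, to = 1, add = TRUE)   # target density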
{ "language": "en", "url": "https://stats.stackexchange.com/questions/526178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Matrix representation of the OLS of an AR(1) process Is there any precise way to express the OLS estimator of the centred error terms $\{u_t\}_{t=1}^{n}$ that follow an AR(1) process? In other words, for \begin{equation} u_t=\rho u_{t-1}+\varepsilon_t,\quad \varepsilon_t\sim N(0,\sigma^2) \end{equation} is there a m...
To facilitate our analysis, we will use the following $(n-1) \times n$ matrices: $$\mathbf{M}_0 \equiv \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \\ \end{bmatrix} \quad \quad ...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/539042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Derivation of integrating over many parameters in Neyman-Scott Problem? I am trying to follow the derivation for the variance estimator in the Neyman-Scott problem given in this article. However, I'm not sure how they go from the 2nd to the 3rd line of this derivation. Any help is appreciated, thanks!
Each of the integrals indexed by $i$ in the product has the form $$\int_{\mathbb R} \exp\left[\phi(\mu_i,x_i,y_i,\sigma)\right]\,\mathrm{d}\mu_i.$$ When $\phi$ is linear or quadratic in $\mu_i,$ such integrals have elementary values. The more difficult circumstance is the quadratic. We can succeed with such integrati...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/563452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Bounding the difference between square roots I want to compute the value of $\frac{1}{\sqrt{a + b + c}}$. Say I can observe a and b, but not c. Instead, I can observe d which is a good approximation for c in the sense that $P( |c-d| \leq 0.001 )$ is large (say 95%), and both c and d are known to have $|c| \leq 1, |d|...
Use a Taylor series (or equivalently the Binomial Theorem) to expand around $c$. This is valid provided $|d-c| \lt |a+b+c|$: $$\eqalign{ &\frac{1}{\sqrt{a+b+c}} - \frac{1}{\sqrt{a+b+d}}\\ &= (a+b+c)^{-1/2} - (a+b+c + (d-c))^{-1/2} \\ &= (a+b+c)^{-1/2} - (a+b+c)^{-1/2}\left(1 + \frac{d-c}{a+b+c}\right)^{-1/2} \\ &=...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/20409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Find the pdf of Y when pdf of X is given $$ f_{X}(x) = \frac{3}{8}(x+1)^{2} ,\ -1 < x < 1 $$ $$Y = \begin{cases} 1 - X^{2}, & X \leq 0,\\ 1- X, & X > 0.\end{cases}$$ I started with: $$ F_{Y}(y) = 1 - P(Y > y) $$ $$ = 1 - P\left(-(1-y)^\frac {1}{2} < X < 1-y\right) $$ From here, I can get $F_{Y}(y)$, and different...
The probability density function of $Y$ can be found by: $$ f_{Y}(y)=\sum_{i}f_{X}(g_{i}^{-1}(y))\left|\frac{dg_{i}^{-1}(y)}{dy}\right|,\quad \mathrm{for}\; y \in \mathcal{S}_{Y} $$ where $g_{i}^{-1}$ denotes the inverse of the transformation function and $\mathcal{S}_{Y}$ the support of $Y$. Let's denote our two trans...
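A simulation check of the transformation-formula route (the density expression below is my own evaluation of that formula, so treat it as a sketch to verify):

set.seed(1)
u <- runif(1e5)
x <- 2 * u^(1/3) - 1                  # inverse CDF of f_X, since F_X(x) = (x+1)^3/8
y <- ifelse(x <= 0, 1 - x^2, 1 - x)
hist(y, prob = TRUE, breaks = 60)
fy <- function(y) (3/8) * ((1 - sqrt(1 - y))^2 / (2*sqrt(1 - y)) + (2 - y)^2)
curve(fy, from = 0, to = 0.999, add = TRUE)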
{ "language": "en", "url": "https://stats.stackexchange.com/questions/70709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
If f(x) is given, what would be the distribution of Y = 2X + 1? If the random variable X has a continuous distribution with the density $ f(x) = \frac{1}{x^2}\Bbb {1}_{(1, \infty)}(x)$, can you find the distribution of $Y = 2X+1$? My attempt: $CDF(Y)$ $\Rightarrow P(Y\le y)$ $\Rightarrow P(2X + 1 \le y)$ $\Rightarrow ...
Since the transformation function is monotonic, we can find the CDF by using PDF transformation and integrating the transformed PDF. PDF Transformation: $$ f_Y(y) = f_X(g^{-1}(y)) \Bigg|\frac{dg^{-1}}{dy} \Bigg|$$ For this situation, $g^{-1}(y) = \frac{y-1}{2}$, and by substitution: $$f_X(g^{-1}(y)) = \frac{1}{((y-1)/...
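Since $X$ has CDF $1-1/x$ on $(1,\infty)$, it can be simulated as $1/U$ with $U\sim\mathrm{Unif}(0,1)$, which gives a quick check of the resulting CDF $F_Y(y)=1-\frac{2}{y-1}$ for $y>3$ (my sketch):

set.seed(1)
y <- 2 * (1 / runif(1e5)) + 1        # Y = 2X + 1 with X = 1/U
plot(ecdf(y), xlim = c(3, 20))       # empirical CDF
curve(1 - 2/(x - 1), from = 3, to = 20, add = TRUE, col = "red")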
{ "language": "en", "url": "https://stats.stackexchange.com/questions/245341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables As a routine exercise, I am trying to find the distribution of $\sqrt{X^2+Y^2}$ where $X$ and $Y$ are independent $ U(0,1)$ random variables. The joint density of $(X,Y)$ is $$f_{X,Y}(x,y)=\mathbf 1_{0<x,y<1}$$ Transforming to polar coor...
$f_z(z)$: So, for $1\le z<\sqrt 2$, we have $\cos^{-1}\left(\frac{1}{z}\right)\le\theta\le\sin^{-1}\left(\frac{1}{z}\right)$. You can simplify your expressions by using symmetry and evaluating them for $\theta_{min} < \theta < \frac{\pi}{4}$ only, i.e., over half of the region, and then doubling the result. The...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/323617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 0 }
State-space model with contemporaneous effects I have the following system of equations: $$ \begin{align} y_t^{(1)}&=y_t^{(2)}-x_t+\epsilon_t\\ y_t^{(2)}&=x_t+\nu_t\\ x_t&=\alpha x_{t-1}+u_t \end{align} $$ where $y_t^{(1)}, y_t^{(2)}$ are observed and $x_t$ is not. I'm having some issues putting this into the state-spa...
Substitute the second equation into the first and you have \begin{align} y_t^{(1)}&=y_t^{(2)}-x_t+\epsilon_t\\ &=x_t+\nu_t-x_t+\epsilon_t\\ &=\nu_t+\epsilon_t. \end{align} So it's \begin{align} \begin{pmatrix}y_t^{(1)}\\ y_t^{(2)}\end{pmatrix}&=\begin{pmatrix}0 \\ 1\end{pmatrix}x_t+\begin{pmatrix}1 & 1 \\ 0 & 1\end{pma...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/373080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Understanding KL divergence between two univariate Gaussian distributions I'm trying to understand KL divergence from this post on SE. I am following @ocram's answer, I understand the following : $\int \left[\log( p(x)) - log( q(x)) \right] p(x) dx$ $=\int \left[ -\frac{1}{2} \log(2\pi) - \log(\sigma_1) - \frac{1}{2} \...
"$E_1$ is the expectation with respect to the first distribution $p(x)$. Denoting it with $E_p$ would be better, I think." – Monotros (I've created this answer from a comment so that this question is answered. Better to have a short answer than no answer at all.)
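Completing that computation gives the standard closed form for two univariate Gaussians $p=N(\mu_1,\sigma_1^2)$ and $q=N(\mu_2,\sigma_2^2)$ (a well-known result, stated here for completeness): $$KL(p\,\|\,q)=\log\frac{\sigma_2}{\sigma_1}+\frac{\sigma_1^2+(\mu_1-\mu_2)^2}{2\sigma_2^2}-\frac{1}{2}.$$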
{ "language": "en", "url": "https://stats.stackexchange.com/questions/406221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Sample standard deviation is a biased estimator: Details in calculating the bias of $s$ In this post Why is sample standard deviation a biased estimator of $\sigma$? the last step is shown as: $$\sigma\left(1-\sqrt\frac{2}{n-1}\frac{\Gamma\frac{n}{2}}{\Gamma\frac{n-1}{2}}\right) = \sigma\left(1-\sqrt\frac{2}{n-1}\frac{...
Making the substitution $x = \frac{n}{2}-1$, you essentially want to control $$1 - \frac{\Gamma(x+1)}{\Gamma(x+\frac{1}{2}) \sqrt{x + \frac{1}{2}}}$$ as $x \to \infty$. Gautschi's inequality (applied with $s=\frac{1}{2}$) implies $$ 1 - \sqrt{\frac{x+1}{x+\frac{1}{2}}} <1 - \frac{\Gamma(x+1)}{\Gamma(x+\frac{1}{2}) \sqr...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/494489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Calculate the consistency of an Estimator I need to determine whether the following estimator $T$ is asymptotically unbiased and consistent for an i.i.d. sample of Gaussian distributions with $X_{i} \sim N(\mu, \sigma)$: \begin{equation*} T = \frac{1}{2} X_{1} + \frac{1}{2n} \sum\limits_{i = 2}^{n} X_{i} \end{equation*...
By definition, a consistent estimator converges in probability to a constant as the sample grows larger. To be explicit, let's subscript $T$ with the sample size. Note that $$\operatorname{Var}(T_n) = \operatorname{Var}\left(\frac{X_1}{2}\right) + \operatorname{Var}\left(\frac{1}{2n}\sum_{i=2}^n X_i\right) \ge \operat...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/495867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Fourth moment of arch(1) process I have an ARCH(1) process \begin{align*} Y_t &= \sigma_t \epsilon_t, \\ \sigma_t^2 &= \omega + \alpha Y_{t-1}^2, \end{align*} and I am trying to express the fourth moment $\mathbb{E}[Y_t^4]$ in terms of $\omega$, $\alpha$ and $\mathbb{E}[\epsilon_t^4]$.
For \begin{align*} Y_t = \sigma_t \epsilon_t, \qquad \sigma^2_t = \omega + \alpha Y^2_{t-1}, \qquad \omega>0, \alpha \geq 0, \end{align*} we assume $\sigma_t$ and $\epsilon_t$ to be independent. I also assume standard normality for $\epsilon_t$, so that $E(\epsilon_t^4)=3$. (You will see from the proof what needs to ha...
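For reference, the moment recursion can be closed as follows (my own algebra, under the stated assumptions that $\epsilon_t$ is standard normal, so $E(\epsilon_t^4)=3$, and that fourth-order stationarity holds): $E(Y_t^4)=E(\sigma_t^4)E(\epsilon_t^4)=3E\left[(\omega+\alpha Y_{t-1}^2)^2\right]=3\left(\omega^2+2\omega\alpha E(Y^2)+\alpha^2 E(Y^4)\right)$. With $E(Y^2)=\omega/(1-\alpha)$, solving for $E(Y^4)$ gives $$E(Y_t^4)=\frac{3\omega^2(1+\alpha)}{(1-\alpha)\left(1-3\alpha^2\right)},\qquad 3\alpha^2<1.$$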
{ "language": "en", "url": "https://stats.stackexchange.com/questions/550022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Bounding the difference between square roots I want to compute the value of $\frac{1}{\sqrt{a + b + c}}$. Say I can observe a and b, but not c. Instead, I can observe d which is a good approximation for c in the sense that $P( |c-d| \leq 0.001 )$ is large (say 95%), and both c and d are known to have $|c| \leq 1, |d|...
$\frac{1}{\sqrt{a + b + c}} - \frac{1}{\sqrt{a + b + d}} = \frac{\sqrt{a + b + d}-\sqrt{a + b + c}}{\sqrt{a + b + c}\sqrt{a + b + d}} $ $ =\frac{(\sqrt{a + b + d}+\sqrt{a + b + c})(\sqrt{a + b + d}-\sqrt{a + b + c})}{(\sqrt{a + b + d}+\sqrt{a + b + c})\sqrt{a + b + c}\sqrt{a + b + d}}$ $ =\frac{(d-c)}{(\sqrt{a + b + d}...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/20409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Decision Tree Probability - With Back Step For the below decision tree, I can see how the probabilities of each end state are calculated... simply multiply the previous decisions: But for this one below, I'm totally stumped. It seems in my head the chance at resetting back to the first decision is completely negated b...
The probability of arriving at S1 is $$\frac 12\cdot \frac 29 + \frac 12\cdot \left(\frac 49\cdot \frac 12\right)\cdot\frac 29 +\frac 12\cdot \left(\frac 49\cdot \frac 12\right)^2\cdot\frac 29 +\frac 12\cdot \left(\frac 49\cdot \frac 12\right)^3\cdot\frac 29 + \cdots\\ = \frac 19\cdot \left[1 + \left(\frac 29\right) ...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/242996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Cdf of the joint density $f(x, y) = \frac{3}{2 \pi} \sqrt{1-x^2-y^2}$ $$f(x, y) = \frac{3}{2\pi} \sqrt{1-x^2-y^2}, \quad x^2 + y^2 \leq 1$$ Find the cdf $F(x, y)$. To do this, we need to compute the integral $$ \int_{-1}^{x} \int_{-1}^{y} \frac{3}{2\pi} \sqrt{1-u^2-v^2} dv du .$$ This is where I'm stuck. Converting to...
This problem will be easier to solve if you first try and visualize what the joint pdf looks as a surface above the $x$-$y$ plane in three-dimensional space. Hint: ignoring the scale factor $\frac{3}{2\pi}$, what is the surface defined by $$z = \begin{cases}\sqrt{1 - x^2 - y^2}, & x^2+y^2 \leq 1,\\ 0, &\text{otherwise...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/249345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables As a routine exercise, I am trying to find the distribution of $\sqrt{X^2+Y^2}$ where $X$ and $Y$ are independent $ U(0,1)$ random variables. The joint density of $(X,Y)$ is $$f_{X,Y}(x,y)=\mathbf 1_{0<x,y<1}$$ Transforming to polar coor...
That the pdf is correct can be checked by a simple simulation samps=sqrt(runif(1e5)^2+runif(1e5)^2) hist(samps,prob=TRUE,nclass=143,col="wheat") df=function(x){pi*x/2-2*x*(x>1)*acos(1/(x+(1-x)*(x<1)))} curve(df,add=TRUE,col="sienna",lwd=3) Finding the cdf without the polar change of variables goes through \begin{alig...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/323617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
Probability that the absolute value of a normal distribution is greater than another I would greatly appreciate anyone who is willing to help. I am thinking about the question of comparing the absolute values of normal distributions. Given $a > b$, $X \sim N(0,a)$ and $Y \sim N(0,b)$, what is the distribution of $|X| - |Y|$?
It may be shown that $|X| \sim HN(a)$ and $|Y| \sim HN(b)$, where $HN(\cdot)$ represents a half-normal distribution. For completeness, the probability density functions of $|X|$ and $|Y|$ are \begin{eqnarray*} f_{|X|} (x) &=& \frac{\sqrt{2}}{a\sqrt{\pi}} \exp \left(-\frac{x^2}{2a^2}\right), \quad x>0 \\ f_{|Y|} (y) &=...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/560633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to derive the Jensen-Shannon divergence from the f-divergence? The Jensen-Shannon divergence is defined as $$JS(p, q) = \frac{1}{2}\left(KL\left(p||\frac{p+q}{2}\right) + KL\left(q||\frac{p+q}{2}\right) \right).$$ In Wikipedia it says that it can be derived from the f-divergence $$D_f(p||q) = \int_{-\infty}^{\infty...
A few observations: $\rm [I] ~(p. 90)$ defines the Jensen-Shannon divergence for $P, Q, ~P\ll Q$ as $$\mathrm{JS}(P,~Q) := D\left(P\bigg \Vert \frac{P+Q}{2}\right)+D\left(Q\bigg \Vert \frac{P+Q}{2}\right)\tag 1\label 1$$ and the associated function to generate $\rm JS(\cdot,\cdot) $ from $D_f(\cdot\Vert\cdot) $ is $$f(x) :=x...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/593928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find mean and variance using characteristic function Consider a random variable with characteristic function $$ \phi(t)=\frac{3\sin(t)}{t^3}-\frac{3\cos(t)}{t^2}, \ \text{when} \ t \neq0 $$ How can I compute $E(X)$ and $Var(X)$ by using this characteristic function? I'm stuck because if I differentiate I get $\phi'...
In general, if the power-series expansion holds for a characteristic function of random variable $X$, which is the case of this $\varphi(t)$ (because power-series expansions for $\sin t$ and $\cos t$ hold and the negative exponent terms cancelled each other, as will be shown in the derivation below), the moments of $X$...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/605455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Probability of reaching node A from node B in exactly X steps I have a three-node graph with two edges (A-B and A-C). I would like to determine the probability of starting from B and ending at C in exactly 100 steps. I have only written out the transition probabilities: P(A|B) = 1, P(B|A) = 0.5, P(A|C) = 1, P(C|A) = 0.5. But there...
After an odd number of steps you must be at A, and after an even number of steps you will be in either B or C, each with probability 0.5; therefore after 100 steps the probability of being in C is 0.5. Edit: More formally, we can define a Markov chain with transition matrix: $$ T = \left(\array{0&\tfrac{1}{2}&\tfrac{1}{2}\...
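The same computation in R (my sketch; states ordered A, B, C):

P <- matrix(c(0,   1/2, 1/2,
              1,   0,   0,
              1,   0,   0), nrow = 3, byrow = TRUE)
P100 <- diag(3)
for (i in 1:100) P100 <- P100 %*% P   # T^100
P100[2, 3]                            # B -> C in exactly 100 steps: 0.5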
{ "language": "en", "url": "https://stats.stackexchange.com/questions/87763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables As a routine exercise, I am trying to find the distribution of $\sqrt{X^2+Y^2}$ where $X$ and $Y$ are independent $ U(0,1)$ random variables. The joint density of $(X,Y)$ is $$f_{X,Y}(x,y)=\mathbf 1_{0<x,y<1}$$ Transforming to polar coor...
For $0 \leq z \leq 1$, $P\left(\sqrt{X^2+Y^2} \leq z\right)$ is just the area of the quarter-circle of radius $z$ which is $\frac 14 \pi z^2$. That is, $$\text{For }0 \leq z \leq 1, ~\text{area of quarter-circle} = \frac{\pi z^2}{4} = P\left(\sqrt{X^2+Y^2} \leq z\right).$$ For $1 < z \leq \sqrt{2}$, the region over w...
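For a quick sanity check of the quarter-circle formula, here is a Monte Carlo sketch ($z = 0.8$ is an arbitrary test point):

    import numpy as np

    # Check P(sqrt(X^2 + Y^2) <= z) = pi z^2 / 4 for 0 <= z <= 1, X, Y ~ U(0,1).
    rng = np.random.default_rng(0)
    n, z = 10**6, 0.8
    r = np.hypot(rng.random(n), rng.random(n))
    print((r <= z).mean(), np.pi * z**2 / 4)   # the two numbers should agree closely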
{ "language": "en", "url": "https://stats.stackexchange.com/questions/323617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 2 }
Finding the conditional distribution of single sample point given sample mean for $N(\mu, 1)$ Suppose that $X_1, \ldots, X_n$ are iid from $N(\mu, 1)$. Find the conditional distribution of $X_1$ given $\bar{X}_n = \frac{1}{n}\sum^n_{i=1} X_i$. So I know that $\bar{X}_n$ is a sufficient statistic for $\mu$ and $X_1$ i...
Firstly, we need to find the joint distribution of $(X_1, \bar{X})$ (for simplicity, write $\bar{X}$ for $\bar{X}_n$). It is easily seen that \begin{equation} \begin{bmatrix} X_1 \\ \bar{X} \end{bmatrix} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ \frac{1}{n} & \frac{1}{n} & \cdots & \frac{1}{n} \end{bmatrix} \begin{bmatr...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/434427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Trying to approximate $E[f(X)]$ - Wolfram Alpha gives $E[f(X)] \approx \frac{1}{\sqrt{3}}$ but I get $E[f(X)] \approx 0$? Let $X \sim \mathcal{N}(\mu_X,\sigma_X^2) = \mathcal{N}(0,1)$. Let $f(x) = e^{-x^2}$. I want to approximate $E[f(X)]$. Wolfram Alpha gives \begin{align} E[f(X)] \approx \frac{1}{\sqrt{3}}. \end{alig...
There's no need to "approximate" when you can derive the exact value of $\mathbb{E}[f(X)]$ . Let us apply the Law of the Unconscious Statistician (LoTUS) to obtain : \begin{align*} \mathbb{E}[f(X)] &= \int_{-\infty}^{+\infty} e^{-x^2} \cdot \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right)~dx\\ &= 2\int_0^{+\infty}...
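A one-liner Monte Carlo confirmation of the exact value:

    import numpy as np

    # Confirm E[exp(-X^2)] = 1/sqrt(3) for X ~ N(0, 1).
    rng = np.random.default_rng(0)
    x = rng.standard_normal(10**6)
    print(np.exp(-x**2).mean(), 1 / np.sqrt(3))   # both ~0.5774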
{ "language": "en", "url": "https://stats.stackexchange.com/questions/495042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Trouble finding var(ax) So the variance of a 6-sided (1,2,3,4,5,6) die is $291.6$ using the formula: $$ \text{Var}(X) = \frac{(b-a+1)^2}{12} $$ Also, $\text{Var}(10X) = 10^2 \cdot \text{Var}(X)$, so that would mean $\text{Var}(10X) = 291.6$. If I want to find the variance of $10X$, is this not the same as multiplying e...
Let's find a formula that will apply to both your situations. One description that covers them both supposes $X$ is a uniform random variable defined on an arithmetic progression $$x_1, x_2, \ldots, x_n = a, a+d, a+2d,\ldots, a+(n-1)d = b.$$ Thus $x_i=a+(i-1)d$ and each $x_i$ has a probability $1/n.$ By definition $$E...
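As a numerical companion (assuming, since the derivation is cut off here, that it concludes with the standard discrete-uniform result $\operatorname{Var}(X) = d^2(n^2-1)/12$), a quick check covering both of your situations:

    import numpy as np

    # Check Var(X) = d^2 (n^2 - 1) / 12 for X uniform on a, a+d, ..., a+(n-1)d.
    def direct_var(a, d, n):
        x = a + d * np.arange(n)
        return x.var()                    # population variance, each value prob 1/n

    for a, d, n in [(1, 1, 6), (10, 10, 6)]:
        print(direct_var(a, d, n), d**2 * (n**2 - 1) / 12)
    # ordinary die: 35/12 ~ 2.917;  faces 10, 20, ..., 60: 100 * 35/12 ~ 291.67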
{ "language": "en", "url": "https://stats.stackexchange.com/questions/504425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that $(A^{-1} + B^{-1})^{-1}=A(A+B)^{-1}B$ I have this equality $$(A^{-1} + B^{-1})^{-1}=A(A+B)^{-1}B$$ where $A$ and $B$ are square symmetric matrices. I have run many tests in R and Matlab that show that this holds; however, I do not know how to prove it.
Note that $$ \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \mathbf{B}$$ is the inverse of $$\left(\mathbf{A}^{-1} + \mathbf{B}^{-1} \right) $$ if and only if $$ \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \mathbf{B} \left(\mathbf{A}^{-1} + \mathbf{B}^{-1} \right) = \mathbf{I} $$ and $$ \left(\mathb...
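In the same spirit as the R/Matlab experiments mentioned in the question, here is a Python spot-check (random symmetric positive-definite matrices are used so that all the inverses are guaranteed to exist):

    import numpy as np

    # Verify (A^{-1} + B^{-1})^{-1} = A (A + B)^{-1} B numerically.
    rng = np.random.default_rng(0)
    def spd(k):
        m = rng.standard_normal((k, k))
        return m @ m.T + k * np.eye(k)   # symmetric positive definite
    A, B = spd(4), spd(4)
    lhs = np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B))
    rhs = A @ np.linalg.inv(A + B) @ B
    print(np.allclose(lhs, rhs))   # True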
{ "language": "en", "url": "https://stats.stackexchange.com/questions/197067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
sample all pairs without repeats Assume I have a very large number of balls in $K$ colors, and we know the fraction of each color. If we randomly sample all pairs so that every pair has two balls of different colors, then what is the fraction of pairs with a given color combination? For example if there are 3 colors wit...
Suppose you have $K$ colours of balls with respective numbers $n_1,...,n_K$, with a total of $n = \sum n_i$ balls. Let $\mathscr{S}$ denote the set of all pairs of distinct balls and let $\mathscr{C}$ denote the set of all pairs of distinct balls of the same colour. Since $\mathscr{C} \subset \mathscr{S}$ the number ...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/402971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Correct or not? Mixed Bayes' Rule - Noisy Communication In this problem, we study a simple noisy communication channel. Suppose that $X$ is a binary signal that takes the values $−1$ and $1$ with equal probability. This signal $X$ is sent through a noisy communication channel, and the medium of transmission adds an ...
Both answers are correct. The likelihood is $$ p \left(y \mid X=1 \right) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{\left(y-1 \right)^2}{2}} $$ Since both $X=1$ and $X=-1$ have the same prior probability, $p(X=1)=\frac{1}{2}$, the posterior follows from Bayes' rule as follows. $$ \begin{align} p \left(X=1 \mid Y \...
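The Bayes-rule posterior above simplifies to a logistic function of $y$, namely $P(X=1 \mid Y=y) = 1/(1+e^{-2y})$; a quick numerical check at an arbitrary point ($y=0.4$):

    import numpy as np
    from scipy.stats import norm

    # Posterior via Bayes' rule vs. its logistic simplification.
    y = 0.4
    post = norm.pdf(y - 1) / (norm.pdf(y - 1) + norm.pdf(y + 1))
    print(post, 1 / (1 + np.exp(-2 * y)))   # the two values should match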
{ "language": "en", "url": "https://stats.stackexchange.com/questions/420744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
calculate probability using joint density function I'm stuck with this question: $X,Y$ are random variables and their joint density function is $$f_{X,Y}(x,y)=2, \quad 0\le x\le 1,\ 0\le y\le x.$$ Now we define a new random variable $Z$: $$Z=XY^3$$ I need to calculate the value of $$F_Z(0.3)$$ and I'm not so sure which bounds I should integ...
Let $\mathcal A_t= \left \{0 \leq x \leq 1,\ 0 \leq y \leq x : xy^3 \leq t \right\}$. The probability $\mathbb P(XY^3 \leq t)$ can be seen as a double integral over $\mathcal A_t$: $$ \mathbb P(XY^3 \leq t) = 2 \int_{\mathcal A_t} dx\,dy $$ The condition $xy^3 \leq t$ implies that $y \leq \left( \frac{t}{x} \right)^{\frac{1}{3}}$ and...
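A Monte Carlo sketch you can use to check whatever bounds you end up with (sorting a uniform pair on the unit square yields exactly the density 2 on the triangle $0 \le y \le x \le 1$):

    import numpy as np

    # Empirical F_Z(t) = P(X Y^3 <= t) for (X, Y) uniform on the triangle y <= x.
    rng = np.random.default_rng(0)
    n, t = 10**6, 0.3
    u, v = rng.random(n), rng.random(n)
    x, y = np.maximum(u, v), np.minimum(u, v)   # uniform on the triangle, density 2
    print((x * y**3 <= t).mean())               # empirical F_Z(0.3)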
{ "language": "en", "url": "https://stats.stackexchange.com/questions/545025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergence of random variables Trying to understand the solution given to this homework problem: Define random variables $X$ and $Y_n$ where $n=1,2\ldots%$ with probability mass functions: $$ f_X(x)=\begin{cases} \frac{1}{2} &\mbox{if } x = -1 \\ \frac{1}{2} &\mbox{if } x = 1 \\ 0 &\mbox{otherwise} \end{cases} and\...
You're told that $$ P(X=1)=P(X=-1)=1/2 \, , $$ and $$ P(Y_n=1)=\frac{1}{2} + \frac{1}{n+1} \;\;\;, \qquad P(Y_n=-1)=\frac{1}{2} - \frac{1}{n+1} \;\;\;, $$ for $n\geq 1$, and you're asked whether or not $Y_n$ converges to $X$ in probability, which means that $$ \lim_{n\to\infty} P(|Y_n-X|\geq \epsilon) = 0 \, , \...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/40701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How can I calculate $\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$ Suppose $\phi(\cdot)$ and $\Phi(\cdot)$ are density function and distribution function of the standard normal distribution. How can one calculate the integral: $$\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\...
Here is another solution: We define \begin{align*} I(\gamma) & =\int_{-\infty}^{\infty}\Phi(\xi x+\gamma)\mathcal{N}(x|0,\sigma^{2})dx, \end{align*} which we can evaluate $\gamma=-\xi\mu$ to obtain our desired expression. We know at least one function value of $I(\gamma)$, e.g., $I(0)=0$ due to symmetry. We take the de...
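A direct numerical check of the resulting closed form $\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,dw = \Phi\!\left(-a/\sqrt{1+b^2}\right)$ (assuming $b>0$; the values $a=1.3$, $b=0.7$ are arbitrary):

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import norm

    # Quadrature of the integral vs. the closed form Phi(-a / sqrt(1 + b^2)).
    a, b = 1.3, 0.7
    val, _ = quad(lambda w: norm.cdf((w - a) / b) * norm.pdf(w), -np.inf, np.inf)
    print(val, norm.cdf(-a / np.sqrt(1 + b**2)))   # should match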
{ "language": "en", "url": "https://stats.stackexchange.com/questions/61080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53", "answer_count": 3, "answer_id": 2 }
Conditional Expectation of 3 variables Suppose $X,Y$ and $Z$ are multivariate normal with given means and a full covariance matrix. The conditional expectation $E(X \mid Y)$ is well known. What is the conditional expectation $E(X \mid Y,Z)$ if $Y$ and $Z$ (and $X$) are correlated? Standard textbooks only seem to cover the case when ...
If $\mathbf{x} \in \mathbb{R}^n, \mathbf{y} \in \mathbb{R}^m$ are jointly Gaussian, \begin{align} \begin{pmatrix}\mathbf{x} \\ \mathbf{y}\end{pmatrix} \sim \mathcal{N}\left( \begin{pmatrix} \mathbf{a} \\ \mathbf{b} \end{pmatrix}, \begin{pmatrix} \mathbf{A} & \mathbf{C} \\ \mathbf{C}^\top & \mathbf{B} \end{pmatrix} \r...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/68329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to calculate $E[X^2]$ for a die roll? Apparently: $$ E[X^2] = 1^2 \cdot \frac{1}{6} + 2^2 \cdot \frac{1}{6} + 3^2\cdot\frac{1}{6}+4^2\cdot\frac{1}{6}+5^2\cdot\frac{1}{6}+6^2\cdot\frac{1}{6} $$ where $X$ is the result of a die roll. Where does this expansion come from?
There are various ways to justify it. For example, it follows from the definition of expectation and the law of the unconscious statistician. Or set $Y=X^2$ and compute $E(Y)$.
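The LOTUS computation is a one-liner: each face $k$ contributes $k^2 \cdot \frac{1}{6}$.

    from fractions import Fraction

    # E[X^2] for a fair die: sum of k^2 * (1/6) over the six faces.
    ex2 = sum(Fraction(k * k, 6) for k in range(1, 7))
    print(ex2)   # 91/6, about 15.17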
{ "language": "en", "url": "https://stats.stackexchange.com/questions/132996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Finding the Probability of a random variable with countably infinite values So I was working on a problem where I am provided with a PMF $p_X(k)= c/3^k$ for $k=1,2,3,\ldots$ I was able to calculate $c$ using the basic property of a PMF, and it came out to be $c=2$. I am not able to solve the next part, which states: "Find $P(X\ge...
Consider $$S=\frac{2}{3}+\cdots+\frac{2}{3^{k-2}}+\frac{2}{3^{k-1}}$$ and multiply $S$ by $\frac{1}{3}$: $$\frac{1}{3}S=\frac{2}{3^{2}}+\cdots+\frac{2}{3^{k-1}}+\frac{2}{3^{k}}.$$ Subtracting $\frac{1}{3}S$ from $S$, $$\frac{2}{3}S=\frac{2}{3}-\frac{2}{3^{k}}.$$ Thus, $$S=1-\frac{1}{3^{k-1}}.$$ Now, if $k=1, P(X\geq k)...
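You can verify the resulting tail formula $P(X\geq k) = 1/3^{k-1}$ by summing the pmf far enough out that the truncation error is negligible:

    from fractions import Fraction

    # Check P(X >= k) = 1 / 3^(k-1) for the pmf p(k) = 2/3^k, k = 1, 2, ...
    def tail(k, terms=100):
        return sum(Fraction(2, 3**j) for j in range(k, k + terms))

    for k in range(1, 6):
        print(k, float(tail(k)), 3.0 ** (1 - k))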
{ "language": "en", "url": "https://stats.stackexchange.com/questions/262359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limiting Sum of i.i.d. Gamma variates Let $X_1,X_2,\ldots$ be a sequence of independently and identically distributed random variables with the probability density function; $$ f(x) = \left\{ \begin{array}{ll} \frac{1}{2}x^2 e^{-x} & \mbox{if $x>0$};\\ 0 & \mbox{otherwise}.\end{array} \right. $$ Sho...
As an alternative to whuber's excellent answer, I will try to derive the exact limit of the probability in question. One of the properties of the gamma distribution is that sums of independent gamma random variables with the same rate/scale parameter are also gamma random variables with shape equal to the sum of the s...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/342704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
Integrating out parameter with improper prior I got this problem while I was reading the book "Machine Learning: A Probabilistic Perspective" by Kevin Murphy. It is in section 7.6.1 of the book. Assume the likelihood is given by $$ \begin{split} p(\mathbf{y}|\mathbf{X},\mathbf{w},\mu,\sigma^2) & = \mathcal{N}(\mathbf{...
This calculation assumes that the columns of the design matrix have been centred, so that: $$(\mathbf{Xw}) \cdot \mathbf{1}_N = \mathbf{w}^\text{T} \mathbf{X}^\text{T} \mathbf{1}_N = \mathbf{w}^\text{T} \mathbf{0} = 0.$$ With this restriction you can rewrite the quadratic form as a quadratic in $\mu$ plus a term that d...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/392584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Cramer-Rao Lower Bound for the estimation of Pearson correlation Given a bivariate Gaussian distribution $\mathcal{N}\left(0,\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}\right)$, I am looking for information on the distribution of $\hat{\rho}$ when estimating $\rho$ on finite sample with the Pearson estima...
I did the computations on my own, but I find something different: We consider the set of $2 \times 2$ correlation matrices $C = \begin{pmatrix} 1 & \theta \\ \theta & 1 \end{pmatrix}$ parameterized by $\theta$. Let $x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \in \mathbf{R}^2$. $f(x;\theta) = \frac{1}{...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/195542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Expectation of $\frac{X_1^4}{(X_1^2 + \cdots + X_d^2)^2}$ Let $X_1$, $X_2$, $\cdots$, $X_d \sim \mathcal{N}(0, 1)$ and be independent. What is the expectation of $\frac{X_1^4}{(X_1^2 + \cdots + X_d^2)^2}$? It is easy to find $\mathbb{E}\left(\frac{X_1^2}{X_1^2 + \cdots + X_d^2}\right) = \frac{1}{d}$ by symmetry. But ...
The distribution of $X_i^2$ is chi-square (and also a special case of gamma). The distribution of $\frac{X_1^2}{X_1^2 + \cdots + X_d^2}$ is thereby beta. The expectation of the square of a beta isn't difficult.
{ "language": "en", "url": "https://stats.stackexchange.com/questions/222915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
$(2Y-1)\sqrt X\sim\mathcal N(0,1)$ when $X\sim\chi^2_{n-1}$ and $Y\sim\text{Beta}\left(\frac{n}{2}-1,\frac{n}{2}-1\right)$ independently $X$ and $Y$ are independently distributed random variables where $X\sim\chi^2_{(n-1)}$ and $Y\sim\text{Beta}\left(\frac{n}{2}-1,\frac{n}{2}-1\right)$. What is the distribution of $Z=...
As user @Chaconne has already done, I was able to provide an algebraic proof with this particular transformation. I have not skipped any details. (We already have $n>2$ for the density of $Y$ to be valid). Let us consider the transformation $(X,Y)\mapsto (U,V)$ such that $U=(2Y-1)\sqrt{X}$ and $V=X$. This implies $x=v...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/327499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 1 }
Testing whether $X\sim\mathsf N(0,1)$ against the alternative that $f(x) =\frac{2}{\Gamma(1/4)}\text{exp}(−x^4)\text{ }I_{(-\infty,\infty)}(x)$ Consider the most powerful test of the null hypothesis that $X$ is a standard normal random variable against the alternative that $X$ is a random variable having pdf $$f(...
The test will reject $H_0$ for sufficiently large values of the ratio $$\begin{align*} \frac{2}{\Gamma\left(\frac{1}{4}\right)}\frac{\text{exp}\left(-x^4\right)}{\frac{1}{\sqrt{2\pi}}\text{exp}\left(-\frac{x^2}{2}\right)} &=\frac{2\sqrt{2\pi}}{\Gamma\left(\frac{1}{4}\right)}\text{exp}\left(-x^4+\frac{1}{2}x^2\right)\\\...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/379808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Probability of Weibull RV conditional on another Weibull RV Let $X, Y$ be independent Weibull distributed random variables and $x>0$ a constant. Is there a closed form solution to calculating the probability $$P(X<x|X<Y)?$$ Or maybe a way to approximate this probability?
Letting $a_1,b_1$ and $a_2,b_2$ denote the parameters of $X$ and $Y$, and assuming that $b_1=b_2=b$, \begin{align} P(X<x \cap X<Y) &=\int_0^x \int_x^\infty f_X(x)f_Y(y) dy\,dx \\&=\int_0^x f_X(x)(1-F_Y(x))dx \\&=\int_0^x a_1 b x^{b-1}e^{-a_1 x^b - a_2 x^b} dx \\&=\frac{a_1}{a_1+a_2}\int_0^{(a_1+a_2)x^b}e^{-u}du \\&=\...
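Evaluating the last integral and dividing by $P(X<Y)=\frac{a_1}{a_1+a_2}$ gives $P(X<x \mid X<Y)=1-e^{-(a_1+a_2)x^b}$; a Monte Carlo sketch of this conclusion, using the parameterization $F(x)=1-e^{-a x^b}$ implicit in the densities above (parameter values are arbitrary):

    import numpy as np

    # Check P(X < x | X < Y) = 1 - exp(-(a1 + a2) x^b); sampling by inversion.
    rng = np.random.default_rng(0)
    a1, a2, b, x, n = 1.5, 0.8, 2.0, 0.7, 10**6
    X = (-np.log(rng.random(n)) / a1) ** (1 / b)
    Y = (-np.log(rng.random(n)) / a2) ** (1 / b)
    given = X < Y
    print((X[given] < x).mean(), 1 - np.exp(-(a1 + a2) * x**b))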
{ "language": "en", "url": "https://stats.stackexchange.com/questions/513466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Calculating $\operatorname{var} \left(\frac{X_1-\bar{X}}{S}\right)$ Suppose $X_1,X_2,\ldots, X_n$ are random variables distributed independently as $N(\theta , \sigma^2)$. define $$S^2=\frac{1}{n-1}\sum_{i=1}^{n} (X_i-\bar{X})^2 ,\qquad \bar{X}=\frac{1}{n}\sum_{i=1}^{n} X_i\,.$$ Take $n=10$. How can $\operatorname{va...
I think it is possible to arrive at an integral representation of $\text{Var}[\frac{X_1-\bar{X}}{S}]$. First, let us express the sample mean $\bar{X}$ and the sample variance $S^2$ in terms of their counterparts for the observations other than $X_1$: \begin{equation*} \bar{X}_* = \frac{1}{n-1}(X_2+\ldots+ X_n) \quad\...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/168306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Expectation of $\frac{X_1^4}{(X_1^2 + \cdots + X_d^2)^2}$ Let $X_1$, $X_2$, $\cdots$, $X_d \sim \mathcal{N}(0, 1)$ and be independent. What is the expectation of $\frac{X_1^4}{(X_1^2 + \cdots + X_d^2)^2}$? It is easy to find $\mathbb{E}\left(\frac{X_1^2}{X_1^2 + \cdots + X_d^2}\right) = \frac{1}{d}$ by symmetry. But ...
This answer expands @Glen_b's answer. Fact 1: If $X_1$, $X_2$, $\cdots$, $X_n$ are independent standard normal distribution random variables, then the sum of their squares has the chi-squared distribution with $n$ degrees of freedom. In other words, $$ X_1^2 + \cdots + X_n^2 \sim \chi^2(n) $$ Therefore, $X_1^2 \si...
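Working through the beta second moment gives $3/(d(d+2))$; here is a Monte Carlo check of that value ($d=5$ is an arbitrary choice):

    import numpy as np

    # Check E[X1^4 / (X1^2 + ... + Xd^2)^2] = 3 / (d (d + 2)),
    # i.e. the second moment of a Beta(1/2, (d-1)/2) variable.
    rng = np.random.default_rng(0)
    d, n = 5, 10**6
    z = rng.standard_normal((n, d))
    print((z[:, 0]**4 / (z**2).sum(axis=1)**2).mean(), 3 / (d * (d + 2)))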
{ "language": "en", "url": "https://stats.stackexchange.com/questions/222915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
How to find the expected value? The random variables $X$ and $Y$ have joint probability function $p(x,y)$ for $x = 0,1$ and $y = 0,1,2$. Suppose $3p(1,1) = p(1,2)$, and $p(1,1)$ maximizes the variance of $XY$. Calculate the probability that $X$ or $Y$ is $0$. Solution: Let $Z = XY$. Let $a, b$, and $c$ be the probabilities...
\begin{align}\mathbb{E}(Z)&=0\cdot P(Z=0)+1\cdot P(Z=1)+2\cdot P(Z=2)\\ &=1\cdot P(Z=1)+2\cdot P(Z=2)\\ &=1\cdot P(X=1,Y=1)+2\cdot P(X=1,Y=2)\\ &=b+2c \end{align}
{ "language": "en", "url": "https://stats.stackexchange.com/questions/313379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Coin Tossing probability I want to find the probability that in ten tosses a coin falls heads at least five times in succession. Is there a formula to compute this probability? The answer provided is $\frac{7}{2^6}$.
Corrected answer after Orangetree pointed out that I had forgotten to take into account that the events were not mutually exclusive. You need to think about how many different coin-tossing sequences give at least $5$ consecutive heads, how many coin-tossing sequences there are in total, and then take the ratio of the two. Clearly there...
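Since there are only $2^{10}=1024$ equally likely sequences, brute force settles the count directly:

    from itertools import product

    # Count sequences of 10 tosses containing a run of at least 5 heads.
    count = sum('H' * 5 in ''.join(seq) for seq in product('HT', repeat=10))
    print(count, count / 2**10)   # 112, i.e. 112/1024 = 7/2^6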
{ "language": "en", "url": "https://stats.stackexchange.com/questions/369157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Joint distribution of X and Y bernoulli random variables A box contains two coins: a regular coin and a biased coin with $P(H)=\frac23$. I choose a coin at random and toss it once. I define the random variable X as a Bernoulli random variable associated with this coin toss, i.e., X=1 if the result of the coin toss is h...
The joint pmf can be described by a 2-by-2 contingency table that shows the probabilities of getting $X=1$ and $Y=1$, $X=1$ and $Y=0$, $X=0$ and $Y=1$, $X=0$ and $Y=0$. So you'll have: $X=0$ $X=1$ $Y=0$ $\frac{1}{2}\cdot\frac{1}{2}\cdot\frac{1}{3}+\frac{1}{2}\cdot\frac{1}{3}\cdot\frac{1}{2}=\frac{1}{6}$ $\frac...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/592258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A three dice roll question I got this question from an interview. A and B are playing a game of dice as follows. A throws two dice and B throws a single die. A wins if the maximum of the two numbers is greater than the throw of B. What is the probability for A to win the game? My solution. If $(X,Y)$ are the two random...
I will take a less formal approach, in order to illustrate my thinking. My first instinct was to visualize the usual $6 \times 6$ array of outcomes $(X,Y)$ of $A$'s dice rolls, and to look at when the larger of the two values is less than or equal to some value: $$\begin{array}{cccccc} (1,1) & (2,1) & (3,1) & (4,1) & (...
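The whole sample space has only $6^3$ outcomes, so exhaustive enumeration confirms the answer:

    from fractions import Fraction
    from itertools import product

    # A wins when the larger of A's two dice strictly exceeds B's single die.
    wins = sum(max(x, y) > b for x, y, b in product(range(1, 7), repeat=3))
    print(Fraction(wins, 6**3))   # 125/216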
{ "language": "en", "url": "https://stats.stackexchange.com/questions/600294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 9, "answer_id": 1 }
How to compute the standard error of the mean of an AR(1) process? I am trying to compute the standard error of the mean for a demeaned AR(1) process $x_{t+1} = \rho x_t + \varepsilon_{t+1} =\sum\limits_{i=0}^{\infty} \rho^i \varepsilon_{t+1-i}$ Here is what I did: $$ \begin{align*} Var(\overline{x}) &= Var\left(\frac{1}{N}
Well, actually, when you take the following, \begin{align*} Var(\overline{x}) &= Var\left(\frac{1}{N} \sum\limits_{t=0}^{N-1} x_t\right) \\ \end{align*} it is easier to derive an implicit value rather than an explicit value in this case. Your answer and mine are the same; it's just that yours is a bit more difficult to
{ "language": "en", "url": "https://stats.stackexchange.com/questions/40585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
How can I calculate $E(X^Y\mid X+Y=1)$? Let $X,Y$ be two independent Bernoulli random variables with success probability $p$. How can I calculate $E(X^Y\mid X+Y=1)$?
Let's calculate all outcomes for $X^Y$: * *$X=0$, $Y=0$ $\Rightarrow X^Y= 0^0 = 1$, $P=(1-p)^2$ *$X=0$, $Y=1$ $\Rightarrow X^Y= 0^1 = 0$, $P=p(1-p)$ *$X=1$, $Y=0$ $\Rightarrow X^Y= 1^0 = 1$, $P=p(1-p)$ *$X=1$, $Y=1$ $\Rightarrow X^Y= 1^1 = 1$, $P=p^2$ The condition $X+Y=1$ means we consider only the two equally l...
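The same enumeration in a few lines of code: conditioning on $X+Y=1$ makes the $p$'s cancel, so the answer is $\frac{1}{2}$ for any $p$ ($p=\frac{1}{3}$ below is arbitrary):

    from fractions import Fraction

    # Average X^Y over the outcomes with X + Y = 1 under the conditional law.
    p = Fraction(1, 3)
    pr = lambda v: p if v == 1 else 1 - p
    cond = {(x, y): pr(x) * pr(y) for x in (0, 1) for y in (0, 1) if x + y == 1}
    total = sum(cond.values())
    print(sum(w * x**y for (x, y), w in cond.items()) / total)   # 1/2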
{ "language": "en", "url": "https://stats.stackexchange.com/questions/86790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Tossing 2 coins, distribution We toss 2 fair coins once. Then on each subsequent toss we only toss the coin(s) that came up heads before. Let $X$ be the total number of heads. The question asks for $E[X]$ and the distribution of $X$. I tried to calculate the probabilities for small values of $X$, but it gets really complic...
Start by considering each coin in isolation, and the question becomes easier. Let $Y$ denote the number of heads for the first coin, and $Z$ denote the number of heads for the second coin, so $X=Y+Z$. $Y$ and $Z$ are identically distributed so let's just consider $Y$. First, we know that if $Y=y$, then the first coin ...
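Under this decomposition each coin's head count is geometric (heads before the first tail, with $P(\text{tail})=\tfrac12$), so $E[Y]=E[Z]=1$ and $E[X]=2$, which a quick simulation confirms:

    import numpy as np

    # X = Y + Z, each term the number of heads a fair coin shows before its
    # first tail; rng.geometric counts trials to the first success.
    rng = np.random.default_rng(0)
    n = 10**6
    heads = lambda: rng.geometric(0.5, n) - 1   # heads before the first tail
    x = heads() + heads()
    print(x.mean())   # ~2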
{ "language": "en", "url": "https://stats.stackexchange.com/questions/119775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Normal Distribution Puzzle/Riddle Some time in the future, a lottery takes place, with winning number N. Three of your friends from the future, John, Joe, and James, provide you with guesses of the number N. John's guess a is randomly drawn from a Gaussian distribution centered at N with standard deviation x; Joe's guess b is random...
You can calculate and maximize the likelihood of N given a, b, c, with x, y, z fixed. The likelihood of a value of N (the probability of sampling a, b, c given that the mean is N) is $LL_{a,b,c}(N) = \Pr(a \mid x,N) \cdot \Pr(b \mid y,N) \cdot \Pr(c \mid z,N)$. With the distributions being independent and Gaussian, this is $LL...
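Maximizing this Gaussian likelihood yields the inverse-variance weighted average of the three guesses; a minimal sketch (all numbers below are purely hypothetical, chosen only for illustration):

    import numpy as np

    # MLE of N: inverse-variance weighted mean of the guesses.
    a, b, c = 102.0, 98.0, 105.0     # hypothetical guesses
    x, y, z = 2.0, 5.0, 10.0         # hypothetical standard deviations
    w = np.array([1 / x**2, 1 / y**2, 1 / z**2])
    print(np.dot(w, [a, b, c]) / w.sum())   # maximum-likelihood estimate of N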
{ "language": "en", "url": "https://stats.stackexchange.com/questions/442888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Expected rolls to roll every number on a die an odd number of times Our family has recently learned how to play a simple game called 'Oh Dear'. Each player has six playing cards (Ace, 2, 3, 4, 5, 6) turned face-up, and players take turns rolling the die. Whatever number the die shows, the corresponding card is turned over. The w...
I think I've found the answer for the single player case: If we write $e_{i}$ for the expected remaining length of the game if $i$ cards are facedown, then we can work out that: (i). $e_{5} = \frac{1}{6}(1) + \frac{5}{6}(e_{4} + 1)$ (ii). $e_{4} = \frac{2}{6}(e_{5} + 1) + \frac{4}{6}(e_{3} + 1)$ (iii). $e_{3} = \frac{3...
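For completeness, a sketch that solves the full linear system numerically, assuming the truncated equations continue the same pattern (from $i$ facedown cards a roll moves to $i+1$ with probability $(6-i)/6$ and to $i-1$ with probability $i/6$, and the game ends at $i=6$):

    import numpy as np

    # e_i = expected remaining rolls with i cards facedown; e_6 = 0 drops out.
    A = np.eye(6)
    for i in range(6):
        if i + 1 < 6:
            A[i, i + 1] = -(6 - i) / 6
        if i >= 1:
            A[i, i - 1] = -i / 6
    e = np.linalg.solve(A, np.ones(6))
    print(e[0])   # expected rolls from the start (all cards face-up): 83.2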
{ "language": "en", "url": "https://stats.stackexchange.com/questions/473444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 1, "answer_id": 0 }
Deriving posterior from a single observation z from a normal distribution (ESL book) I am reading the book The Elements of Statistical Learning by Hastie, Tibshirani and Friedman. On page 271 the authors derive a posterior distribution from a single observation $z\sim N(\theta, 1)$, where the prior of $\theta$ is speci...
Since we're looking for the pdf of $\theta$, we're only concerned with terms that include it. \begin{align} \Pr\left(\theta |\textbf{Z}\right) &\propto \Pr\left(\textbf{Z} \mid \theta\right) \Pr(\theta) \\ &\propto \exp\left(-\frac{1}{2}(z-\theta)^2 -\frac{1}{2\tau}\theta^2 \right) \\ &= \exp\left(-\frac{1}{2}\left((1...
{ "language": "en", "url": "https://stats.stackexchange.com/questions/501858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }