Я попробовал смесь модели Гаусса и столкнулся с несколькими проблемами.Я вставил весь код в ideone.Код по адресу: https://ideone.com/dNYtZ2
Когда я пытаюсь запустить fitMixGauss (data, k)), я получаю единственную матричную ошибку из функции, приведенной ниже.
def fitMixGauss(data, k):
"""
Estimate a k MoG model that would fit the data. Incremently plots the outcome.
Keyword arguments:
data -- d by n matrix containing data points.
k -- scalar representing the number of gaussians to use in the MoG model.
Returns:
mixGaussEst -- dict containing the estimated MoG parameters.
"""
# MAIN E-M ROUTINE
# In the E-M algorithm, we calculate a complete posterior distribution over
# the (nData) hidden variables in the E-Step.
# In the M-Step, we update the parameters of the Gaussians (mean, cov, w).
nDims, nData = data.shape
postHidden = np.zeros(shape=(k, nData))
# we will initialize the values to random values
mixGaussEst = dict()
mixGaussEst['d'] = nDims
mixGaussEst['k'] = k
mixGaussEst['weight'] = (1 / k) * np.ones(shape=(k))
mixGaussEst['mean'] = 2 * np.random.randn(nDims, k)
mixGaussEst['cov'] = np.zeros(shape=(nDims, nDims, k))
for cGauss in range(k):
mixGaussEst['cov'][:, :, cGauss] = 2.5 + 1.5 * np.random.uniform() * np.eye(nDims)
# calculate current likelihood
# TO DO - fill in this routine
logLike = getMixGaussLogLike(data, mixGaussEst)
print('Log Likelihood Iter 0 : {:4.3f}\n'.format(logLike))
nIter = 30;
logLikeVec = np.zeros(shape=(2 * nIter))
boundVec = np.zeros(shape=(2 * nIter))
fig, ax = plt.subplots(1, 1)
for cIter in range(nIter):
# ===================== =====================
# Expectation step
# ===================== =====================
curCov = mixGaussEst['cov']
curWeight = mixGaussEst['weight']
curMean = mixGaussEst['mean']
num= np.zeros(shape=(k,nData))
for cData in range(nData):
# TO DO (g) : fill in column of 'hidden' - calculate posterior probability that
# this data point came from each of the Gaussians
# replace this:
thisData = data[:,cData]
#for c in range(k):
# num[c] = mixGaussEst['weight'][c] * (1/((2*np.pi)**(nDims)*np.linalg.det(mixGaussEst['cov'][:,:,c]))**(1/2))*np.exp(-0.5*(np.transpose(thisData-mixGaussEst['mean'][:,c])))@np.linalg.inv(mixGaussEst['cov'][:,:,c])@(thisData-mixGaussEst['mean'][:,c])
thisdata = data[:,cData];
denominatorExp = 0
for j in range(k):
mu = curMean[:,j]
sigma = curCov[:,:,j]
curNorm = (1/((2*np.pi)**(nDims)*np.linalg.det(sigma))**(1/2))*np.exp(-0.5*(np.transpose(thisData-mu)))@np.linalg.inv(sigma)@(mu)
num[j,cData] = curWeight[j]*curNorm
denominatorExp = denominatorExp + num[j,cData]
postHidden[:, cData] = num[:,cData]/denominatorExp
# ===================== =====================
# Maximization Step
# ===================== =====================
# for each constituent Gaussian
for cGauss in range(k):
# TO DO (h): Update weighting parameters mixGauss.weight based on the total
# posterior probability associated with each Gaussian. Replace this:
#mixGaussEst['weight'][cGauss] = mixGaussEst['weight'][cGauss]
sum_Kth_Gauss_Resp = np.sum(postHidden[cGauss,:])
mixGaussEst['weight'][cGauss] = sum_Kth_Gauss_Resp /np.sum(postHidden)
#mixGaussEst['weight'][cGauss] = np.sum(postHidden[cGauss,:])/sum(sum(postHidden[:,:]));
# TO DO (i): Update mean parameters mixGauss.mean by weighted average
# where weights are given by posterior probability associated with
# Gaussian. Replace this:
#mixGaussEst['mean'][:,cGauss] = mixGaussEst['mean'][:,cGauss]
numerator = 0
for j in range(nData):
numerator = numerator + postHidden[cGauss,j]*data[:,j]
numerator = np.dot( postHidden[cGauss,:],data[0,:])
mixGaussEst['mean'][:,cGauss] = numerator / sum_Kth_Gauss_Resp
# TO DO (j): Update covarance parameter based on weighted average of
# square distance from update mean, where weights are given by
# posterior probability associated with Gaussian
#mixGaussEst['cov'][:,:,cGauss] = mixGaussEst['cov'][:,:,cGauss]
muMatrix = mixGaussEst['mean'][:,cGauss]
muMatrix = muMatrix.reshape((2,1))
numerator = 0
for j in range(nData):
kk=data[:,j]
kk.reshape((2,1))
numerator_i = postHidden[cGauss,j]*(kk-muMatrix)@np.transpose(kk-muMatrix)
numerator = numerator + numerator_i
mixGaussEst['cov'][:,:,cGauss] = numerator /sum_Kth_Gauss_Resp
# draw the new solution
drawEMData2d(data, mixGaussEst)
time.sleep(0.7)
fig.canvas.draw()
# calculate the log likelihood
logLike = getMixGaussLogLike(data, mixGaussEst)
print('Log Likelihood After Iter {} : {:4.3f}\n'.format(cIter, logLike))
return mixGaussEst
Лог вероятности даетnan и почему весь код (в ideone) в конце выдает единственную матричную ошибку.