Can anyone help? I'm working through the deep learning courses from deeplearning.ai, and I'm on week 2 of course 1. My propagate function is as follows. Forward propagation:
You get X. You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, \ldots, a^{(m-1)}, a^{(m)})$. You then compute the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log(a^{(i)}) + (1 - y^{(i)})\log(1 - a^{(i)})\right]$
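For reference, the backward step computes the gradients of this cost; these are the standard logistic-regression results that the dw and db lines in the code below are meant to implement:

$\frac{\partial J}{\partial w} = \frac{1}{m} X (A - Y)^T, \qquad \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^{m} \left(a^{(i)} - y^{(i)}\right)$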
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot((w.T,X)+b)) # compute activation
cost = -1/m*np.sum(Y*np.log(A)+(1-Y)*np.log(1-A), axis=1,keepdims=True) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = 1/m*dot((X,(A-Y).T))
db = 1/m*np.sum(A-Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
and
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
However, I get the following error:
TypeError Traceback (most recent call last)
----> 3 grads, cost = propagate(w, b, X, Y)
---> 26 A = sigmoid(np.dot((w.T,X)+b)) # compute activation
TypeError: can only concatenate tuple (not "float") to tuple
How do I fix this? My sigmoid function works fine.
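Update: the TypeError comes from misplaced parentheses in the np.dot calls. In A = sigmoid(np.dot((w.T,X)+b)), Python first builds the tuple (w.T, X) and then tries to add the float b to it, which is exactly the "can only concatenate tuple (not "float") to tuple" message; the dw line has the same problem plus a missing np. prefix on dot. A minimal corrected sketch follows; the sigmoid helper here is an assumed stand-in, since the original post doesn't show it:

import numpy as np

def sigmoid(z):
    # Assumed stand-in for the asker's sigmoid (not shown in the post).
    return 1 / (1 + np.exp(-z))

def propagate(w, b, X, Y):
    m = X.shape[1]

    # Forward propagation: close np.dot after its two array arguments, THEN add b.
    A = sigmoid(np.dot(w.T, X) + b)      # was: sigmoid(np.dot((w.T,X)+b))
    cost = -1 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))

    # Backward propagation: np.dot takes two arrays, not a single tuple.
    dw = 1 / m * np.dot(X, (A - Y).T)    # was: 1/m*dot((X,(A-Y).T))
    db = 1 / m * np.sum(A - Y)

    cost = np.squeeze(cost)
    return {"dw": dw, "db": db}, cost

With this change, the test snippet above runs without the TypeError, and dw keeps the same shape as w, so the assertions pass.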