KNN Regression results in zero MSE on training set (sklearn)

























I am using sklearn and trying to evaluate a KNN regression function with the code below:



def cross_validate(X, y, n_neighbors, test_size=0.20):
    training_mses = []
    test_mses = []

    n = X.shape[0]
    test_n = int(np.round(test_size * n, 0))

    indices = np.arange(n)
    random.shuffle(indices)

    test_indices = indices[0:test_n]
    training_indices = indices[test_n:]

    X_test, y_test = X[test_indices], y[test_indices]
    X_train, y_train = X[training_indices], y[training_indices]

    knn = neighbors.KNeighborsRegressor(n_neighbors=n_neighbors, weights="distance",
                                        algorithm='brute')
    model = knn.fit(X_train, y_train)
    y_hat = model.predict(X_train)
    training_mse = mse(y_train - y_hat)

    model2 = knn.fit(X_test, y_test)
    y_hat = model2.predict(X_test)
    test_mse = mse(y_test - y_hat)

    return training_mse, test_mse


I did something similar with linear regression. The difference is that when I run it with KNN regression, training_mse and test_mse are both 0. If I use the test data on the model fitted with the training set, it gives me a non-zero MSE. But I just don't believe that the fitted values for the training and test sets are identical to the observed values. What am I doing wrong? The function I was trying to emulate is below, and it gives non-zero values for the MSE:



def cross_validate(formula, data, test_size=0.20):
    training_mses = []
    test_mses = []

    n = data.shape[0]
    test_n = int(np.round(test_size * n, 0))

    indices = deepcopy(data.index).values
    random.shuffle(indices)

    test_indices = indices[0:test_n]
    training_indices = indices[test_n:]

    test_set = data.loc[test_indices]
    training_set = data.loc[training_indices]

    y, X = patsy.dmatrices(formula, training_set, return_type="matrix")
    model = linear.LinearRegression(fit_intercept=False).fit(X, y)
    y_hat = model.predict(X)

    training_mse = mse(y - y_hat)

    y, X = patsy.dmatrices(formula, test_set, return_type="matrix")
    y_hat = model.predict(X)

    test_mse = mse(y - y_hat)

    return training_mse, test_mse
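For reference, the same train/test MSE evaluation can be written with sklearn's built-in helpers. This is my own sketch, not part of the original post (the function name `cross_validate_knn` is mine); note that it scores the held-out test set with the model fitted on the training set, rather than refitting on the test set:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn import neighbors

def cross_validate_knn(X, y, n_neighbors, test_size=0.20):
    # Split once into training and held-out test data.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size)

    # Fit on the training set only (default weights="uniform").
    knn = neighbors.KNeighborsRegressor(n_neighbors=n_neighbors).fit(X_train, y_train)

    # Score both sets with the same fitted model.
    training_mse = mean_squared_error(y_train, knn.predict(X_train))
    test_mse = mean_squared_error(y_test, knn.predict(X_test))
    return training_mse, test_mse
```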




























  • "If I use the test data on the model fitted with the training set, it gives me an mse value that is non-zero." Are you talking about KNN or LinearRegression here? If KNN, then it makes sense that the MSE is 0 for the training data, for obvious reasons, but not for the test data. – Vivek Kumar, Nov 15 '18 at 9:19

  • Hey Vivek, thanks for the reply. I am talking about the KNN. Why does it make sense? I think there is something I don't understand about the differences between the two. – Sean Mahoney, Nov 15 '18 at 11:10

  • When you use weights="distance" in KNN, then at prediction time each training sample is at distance 0 from the corresponding sample on which the KNN was fitted (via fit()), because it is the same data. Hence the exact same values are returned, and the MSE is 0. – Vivek Kumar, Nov 15 '18 at 11:12

  • Thanks Vivek. When I take away that term I do indeed get non-zero errors. – Sean Mahoney, Nov 15 '18 at 22:45
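The effect described in the comments can be checked directly. The sketch below (mine, not from the thread) fits two regressors on the same random data and compares training-set MSE for weights="distance" versus the default weights="uniform":

```python
import numpy as np
from sklearn import neighbors

rng = np.random.RandomState(0)
X = rng.rand(100, 2)
y = rng.rand(100)

# Distance-weighted: each training point is its own nearest neighbor at
# distance 0, so sklearn gives it all the weight and returns its stored target.
weighted = neighbors.KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X, y)

# Uniform: the prediction is the plain mean of 5 neighbors' targets,
# which generally differs from the point's own target.
uniform = neighbors.KNeighborsRegressor(n_neighbors=5, weights="uniform").fit(X, y)

mse_weighted = np.mean((y - weighted.predict(X)) ** 2)
mse_uniform = np.mean((y - uniform.predict(X)) ** 2)

print(mse_weighted)  # effectively 0
print(mse_uniform)   # strictly positive
```

On held-out data neither model sees zero distances, so both give non-zero test MSE; only the training-set score is degenerate.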















python scikit-learn regression knn






edited Dec 31 '18 at 8:16 by Rohan Nadagouda
asked Nov 15 '18 at 2:16 by Sean Mahoney






