0 votes
2.6k views

Problem:

I am new to the RandomForest model. While predicting on my test data with a RandomForest model, I keep getting the ValueError below:

Input contains NaN, infinity or a value too large for dtype('float64')

I have spent more than two days on this error but I am unable to fix it. Can somebody help me fix it?
2.3k points


2 Answers

0 votes

Solution:

This bothered me in the past as well. You can use the approaches below to fix your error.
In most cases, removing all the infinite and null values will solve the problem.

To replace all infinite values with NaN, you can use:

df.replace([np.inf, -np.inf], np.nan, inplace=True)
Then fill the null values however you like: with a specific value such as 999, with the mean, or with your own method for assigning missing values:
df.fillna(999, inplace=True)

or use the method below to fill with column means:

df.fillna(df.mean(), inplace=True)
You can also use numpy.nan_to_num to replace your NaN values with zero and the infinite values with very large finite numbers:
df[:] = np.nan_to_num(df)
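If you want control over what the replacements are, numpy.nan_to_num also accepts explicit values through its nan, posinf and neginf keyword arguments (available in newer NumPy releases); a small sketch:

import numpy as np

arr = np.array([1.0, np.nan, np.inf, -np.inf])
print(np.nan_to_num(arr, nan=0.0, posinf=1e9, neginf=-1e9))   # [1.e+00 0.e+00 1.e+09 -1.e+09]
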
If your data contains values larger than float32 can hold, try running a scaler on it first (scikit-learn's random forests cast the input to float32 internally).
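Putting the pieces together, here is a minimal, self-contained sketch with toy data; the column names, StandardScaler and RandomForestClassifier are only illustrative choices, not your actual setup:

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler

X = pd.DataFrame({"a": [1.0, np.nan, 3.0, 4.0],
                  "b": [np.inf, 2.0, -np.inf, 1.0]})   # toy data with NaN and infinities
y = [0, 1, 0, 1]

X = X.replace([np.inf, -np.inf], np.nan)   # infinities -> NaN
X = X.fillna(X.mean())                     # NaN -> column means
X = StandardScaler().fit_transform(X)      # optional: tame very large magnitudes

model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(X, y)                            # no ValueError now
print(model.predict(X))
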
I hope the above solutions help you fix your issue.
5k points
0 votes

This can happen inside scikit-learn, and it depends on what you are doing. Read the documentation of the functions you are using; you may be calling one that depends on, for example, your matrix being positive definite, and your data does not fulfill that criterion.

Code Example:

If you are checking your matrix with code like:

np.isnan(mat.any())    # returns False

np.isfinite(mat.all())  # returns True

then the check will pass even though the matrix still contains NaN or infinite values, so scikit-learn will still raise the error.
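
To see why, here is a tiny demonstration with a made-up 2x2 matrix mat that does contain a NaN; any() collapses the matrix to a single boolean before isnan ever sees the values:

import numpy as np

mat = np.array([[1.0, np.nan], [2.0, 3.0]])
print(np.isnan(mat.any()))     # False -- we only asked whether the boolean True is NaN
print(np.isfinite(mat.all()))  # True  -- so the NaN goes completely undetected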

Solution:

Write those checks instead as:

np.any(np.isnan(mat))

And

np.all(np.isfinite(mat))

You want to check whether any of the elements is NaN, not whether the return value of the any() function is a number.
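
Once the correct checks report a problem, it can also help to locate the offending entries. A short sketch, where mat and the column names are made up for illustration:

import numpy as np
import pandas as pd

mat = np.array([[1.0, np.nan], [np.inf, 3.0]])
print(np.argwhere(np.isnan(mat)))    # row/column indices of the NaN entries
print(np.argwhere(np.isinf(mat)))    # row/column indices of the infinite entries

df = pd.DataFrame(mat, columns=["a", "b"])
print(df.columns[df.isna().any()])   # names of the columns that contain NaN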

Sklearn with pandas:

If you are facing the same issue while using sklearn with pandas, the solution is to reset the index of the DataFrame df before running any sklearn code:

df = df.reset_index()

This is especially important if you have removed some entries, for example:

df = df[df.label == 'desired_one']
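
Here is a minimal sketch with a made-up label column; after the row filter the index has gaps, and reset_index gives sklearn a clean 0..n-1 index again (drop=True is optional and simply discards the old index instead of keeping it as a new column):

import pandas as pd

df = pd.DataFrame({"label": ["desired_one", "other", "desired_one"],
                   "value": [1.0, 2.0, 3.0]})
df = df[df.label == "desired_one"]   # index is now [0, 2]
df = df.reset_index(drop=True)       # index is back to [0, 1]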

Infinite and null values:

In most cases getting rid of infinite and null values can solve this problem.

Getting rid of infinite values:

You can do this with:

df.replace([np.inf, -np.inf], np.nan, inplace=True)

Getting rid of null values:

Get rid of the null values however you like: fill them with a specific value such as 999, or create your own function to impute the missing values:

df.fillna(999, inplace=True)

 

3.9k points