pandas.errors.ParserError: Error tokenizing data. C error: Expected 44 fields in line 3169, saw 56 #2
Comments
Hello, I have also encountered this problem. Have you solved it? If so, could you tell me the solution?
That is because the document P10-Rec1-All-Date-New-Section_30.tsv is missing 3 lines of data, so some correction is needed.
Hello, could you tell me more about how to deal with this problem? Thank you.
I used a simple way to correct this problem: find the document named P10-Rec1-All-Date-New-Section_30.tsv, then check around line 3159 or 3169 (I can't remember exactly which), and you will find 3 lines that are different. Following the structure of the preceding or following lines, re-add those 3 lines. You will need to spend a little time getting familiar with the structure of the data documents in order to mock the lines up. Good luck!
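For anyone trying to locate those lines, here is a minimal sketch of a field-count scan (the file path and the skiprows value of 23 are assumptions taken from the read_csv call later in this thread, not part of the original fix):

path = "P10-Rec1-All-Date-New-Section_30.tsv"  # assumed path to the problematic gaze file
skip = 23  # metadata lines before the header, matching skiprows=23 in preprocessing.py

with open(path, encoding="utf-8") as f:
    lines = f.readlines()[skip:]

# The header row defines the expected number of tab-separated fields
expected = len(lines[0].rstrip("\n").split("\t"))
for i, line in enumerate(lines[1:], start=2):
    n = len(line.rstrip("\n").split("\t"))
    if n != expected:
        print(f"line {i + skip}: {n} fields (expected {expected})")

Any line it reports can then be compared with its neighbours and re-added or fixed by hand.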
Hello, I know very little about the original data file. I would be very grateful if you could share your corrected P10-Rec1-All-Date-New-Section_30.tsv file.
Hi everyone, sorry for the late reply. I tried running the code again but I don't get this error, so maybe the dataset changed? In fact, I looked and it seems I don't have this file P10-Rec1-All-Data-New-Section_30.tsv; I have P10-Rec1-All-Data-New_Section_28.tsv and then P10-Rec1-All-Data-New_Section_32.tsv. I think the fastest solution would be to just skip this file, and hopefully others don't have the same problem. Otherwise, try @KONE544174974's solution; maybe they can give a bit more detail on how they solved the problem, or there is a way to do it programmatically that could be shared. I would try to do it myself, but without being able to reproduce or see the problem I can't come up with a solution.
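If skipping a malformed file is acceptable, one possible sketch is to catch the parser error around the read call; the helper name below is hypothetical and not part of preprocessing.py:

import pandas as pd
from pandas.errors import ParserError

def read_gaze_or_skip(gaze_file):
    # Return the parsed gaze DataFrame, or None if the file cannot be tokenized.
    try:
        return pd.read_csv(gaze_file, sep='\t', skiprows=23)
    except ParserError as err:
        print(f"Skipping {gaze_file}: {err}")
        return None

The preprocessing loop could then check for None and move on to the next session instead of crashing.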
Hi, while running the main file, I am getting this error: ----Loading dataset---- Running on GPU? True - gpu_num: 0
Hello, have you solved this problem? |
Hello, when I reproduce the code and run the preprocessing.py file, I get the following error:
Preprocessing: 33%|███▎ | 185/565 [00:52<01:48, 3.49it/s]
Traceback (most recent call last):
File "D:\Codes\MHyEEG-main\data\preprocessing.py", line 252, in
preprocess(sessions_dir, args.save_path, args.verbose)
File "D:\Codes\MHyEEG-main\data\preprocessing.py", line 125, in preprocess
gaze_df = pd.read_csv(gaze_file, sep='\t', skiprows=23)
File "D:\anaconda3\envs\pytorch_thesis\lib\site-packages\pandas\io\parsers\readers.py", line 948, in read_csv
return _read(filepath_or_buffer, kwds)
File "D:\anaconda3\envs\pytorch_thesis\lib\site-packages\pandas\io\parsers\readers.py", line 617, in _read
return parser.read(nrows)
File "D:\anaconda3\envs\pytorch_thesis\lib\site-packages\pandas\io\parsers\readers.py", line 1748, in read
) = self._engine.read( # type: ignore[attr-defined]
File "D:\anaconda3\envs\pytorch_thesis\lib\site-packages\pandas\io\parsers\c_parser_wrapper.py", line 234, in read
chunks = self._reader.read_low_memory(nrows)
File "parsers.pyx", line 843, in pandas._libs.parsers.TextReader.read_low_memory
File "parsers.pyx", line 904, in pandas._libs.parsers.TextReader._read_rows
File "parsers.pyx", line 879, in pandas._libs.parsers.TextReader._tokenize_rows
File "parsers.pyx", line 890, in pandas._libs.parsers.TextReader._check_tokenize_status
File "parsers.pyx", line 2058, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Expected 44 fields in line 3169, saw 56
This error means pandas failed while tokenizing the data: it encountered a row that parsed into 56 fields, but based on the header it expected 44 fields. This is likely caused by some lines in the data file that do not match the format the program expects.
Excuse me, is there a corresponding solution? If so, please let me know; I would be very grateful.
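One possible workaround, not confirmed by the repository authors: pandas 1.3+ can drop malformed rows instead of raising, at the cost of silently losing those rows (the file path below is an assumption):

import pandas as pd

gaze_file = "P10-Rec1-All-Data-New-Section_30.tsv"  # assumed path
# Rows whose field count does not match the header are skipped instead of raising ParserError
gaze_df = pd.read_csv(gaze_file, sep='\t', skiprows=23, on_bad_lines='skip')

Since this drops the offending rows rather than repairing them, the resulting gaze data should be checked for alignment with the other modalities.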