Correctly support long paths or filenames #24224
Comments
I can confirm this is an issue. The problem appears for me when using encfs to encrypt a folder that is then synced via Nextcloud. It crops up when the sync client tries to upload the file: it simply triggers an internal server error.
I'm having this issue too, mostly caused by tags appended to filenames by the utility Tagspaces, which aids file organisation via the transferable and archive-safe mechanism of adding tags in square brackets to the file name. I can't work out what the cut-off for "too long" is, but I don't think it's even 150 characters. Using the Linux (Debian) client; unsure of the server but looking into it. Slightly frustrating, as the Nextcloud/Tagspaces combo makes for a long-term viable lo-fi personal database.
I'm having the same issue, and sadly my experience with Nextcloud is not great here, but from what I saw the problem comes from how the file transfer is currently done. Normally, on a GNU/Linux system a filename shouldn't be more than 255 characters, with a max path of 4096. Now that I look at it (yes, I'm just a dirty anime/VN watcher): the path doesn't exceed 4096 characters, so it's all good at that point, but there's a problem once the transferId/.part suffix is appended to an already long filename.

My idea to counter this would be to use a hash (could be a fast one like CRC32) of the path/filename together with the transferId/parts in a temp directory, so we can later reconstruct the file with the proper path and filename, as long as the file stored on the device didn't exceed the limits. For example:

/tmp/nextcloud/B299DF4D.ocTransferId1984429488.part1
/tmp/nextcloud/B299DF4D.ocTransferId1984429488.part2
...

And retrieve the filename with the path by comparing hashes (could be a hashmap). I think CRC32 is good because it's fast and there are 2^32 possible values, which should be way more than enough for this purpose. I would be glad if any developer on this project could do it, but if not, I guess I could give it a try. Thanks.

EDIT: Actually, there's still another problem when a path exceeds 4096 characters once the transferId/part is added, which is basically the same problem as above: the long path used on the device syncing to the server would exceed the limit after adding those suffixes. Third problem: if the device has a shorter root path for the folders being synced than the Nextcloud data directory, files won't be written if the subpaths are close to 4096 characters. Maybe creating a virtual file system (that would be the most preferable in my opinion, since you wouldn't need to care how much things exceed, but it's kind of reinventing the wheel if we don't use something that already exists; it could be just allocating a file and using it as a device) or using only hashes (which isn't a terrible idea, but if you wanted to back up those files directly on the server you wouldn't know what they are without looking at the database, and there can be, well, rare collisions) could help. But the more I advance, the more problems I see.
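A minimal sketch of that CRC32 idea in Python, purely as an illustration (not Nextcloud's actual code; the staging directory and the in-memory mapping are hypothetical stand-ins):

```python
import zlib

TMP_DIR = "/tmp/nextcloud"   # hypothetical staging directory from the example above
name_map = {}                # hash -> original path; could just as well live in the DB

def temp_name(path, transfer_id, part):
    # CRC32 of the full path, rendered as 8 hex digits (e.g. "B299DF4D"),
    # gives a short, fixed-length stand-in for an arbitrarily long name.
    digest = format(zlib.crc32(path.encode("utf-8")), "08X")
    name_map[digest] = path  # remembered so the real path can be reconstructed later
    return f"{TMP_DIR}/{digest}.ocTransferId{transfer_id}.part{part}"

print(temp_name("Bibliography/VacuumLeaks/a-very-long-name.pdf", 1984429488, 1))
```

Because the stand-in is always eight hex digits, the temp name's length no longer depends on the original filename at all; the collision concern raised above would have to be handled by the lookup table.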
Same issue here. Version 3.1.3 (Ubuntu).
I can also confirm this is the issue, but it's not the fault of Nextcloud. I'm on an ext4 filesystem, which has a maximum of 255 characters for a filename. However, my home directory uses ecryptfs, and per Mike Mabey's PhD notes, the way ecryptfs encrypts filenames only allows for a maximum of 143 (ASCII) characters in the unencrypted filename (less if you have Unicode characters). So while my Nextcloud server uses ext4's 255-character limit, any file over 143 characters stored on the server and synced to my home directory will fail. It's rather annoying to see the big red icon with the white 'X'. Luckily, there were only a dozen or so filenames, so I just manually edited them on the server. Note: LUKS is a better encryption method imho, but the advantage of ecryptfs is that no other user can view the contents of your home directory without your key, whereas LUKS is decrypted at boot or mount.
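For anyone wondering where the odd 143 figure comes from, here is a rough back-of-envelope; the breakdown is my own estimate from the lower filesystem's 255-byte limit and eCryptfs's filename encoding, not something stated in this thread:

```python
# Rough estimate of the eCryptfs plaintext-filename ceiling (assumptions noted inline)
NAME_MAX = 255                            # limit of the lower ext4 filesystem
prefix = len("ECRYPTFS_FNEK_ENCRYPTED.")  # fixed marker prefix: 24 chars
encoded_budget = NAME_MAX - prefix        # 231 chars left for the encoded payload
payload_bytes = encoded_budget * 3 // 4   # base64-style encoding: 3 bytes per 4 chars -> 173
# Subtracting the encryption packet header and cipher block padding leaves
# roughly 143 plaintext bytes, matching the limit cited above.
print(prefix, encoded_budget, payload_bytes)
```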
Hello, same problem here, plus a nightmare of limit issues because we have Windows, Linux & macOS desktop users (thus potentially three different filename/path length limits). Alternate proposal:
The problem with the suggested solutions is that they assume actual user-friendly files are being uploaded. For me, the problem appeared with files encrypted with gocryptfs, which also encrypts (and substantially lengthens) filenames. I cannot shorten these filenames, but would still expect NC to sync them successfully.
This issue looks to me like a problem with the filesystems you are using. I don't think this is fixable by Nextcloud, as we would need to provide workarounds for every filesystem out there that has a filename/path length limitation. cc @nextcloud/server-triage on whether fixing this is feasible.
For me the issue appeared on Linux, which doesn't have the Windows path length limit, so I don't think the issue is with the file system I was using. Dropbox dealt with the same file path on the same OS fine.
This is unfortunately only semi-accurate. Depending on the filesystem used on Linux, there is indeed a path limit, which also means there is an intrinsic path limit on the server side, at least when not using object storage. The question is whether the path limit on the server side couldn't be circumvented somehow.
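To see which limits a given mount actually enforces, the standard POSIX pathconf interface can be queried; a small Unix-only check ("/" is just an example mount point, substitute the Nextcloud data directory's mount):

```python
import os

print(os.pathconf("/", "PC_NAME_MAX"))  # typically 255 on ext4
print(os.pathconf("/", "PC_PATH_MAX"))  # typically 4096 on Linux
```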
This is still a valid problem for me!
This is still a valid issue.
I suggest automatically truncating all filenames (plus extension) to a maximum length of 143 if the filename is longer than 143 ASCII characters. Worry about Unicode if it comes up. This would solve these issues.
The suggestion of @bubonic is a very good one. I would recommend truncating the name to 140 characters and adding a random/increasing digit to ensure a non-duplicate name, and maybe creating a .txt file (truncated_namefile.txt?) that lists the initial names vs. the truncated ones, to help the user follow the action taken.
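A minimal sketch of that proposal in Python, purely illustrative (the 140-character limit, the increasing-digit suffix, and the log file name are the suggestions above, not anything Nextcloud ships):

```python
import os

LIMIT = 140
LOG = "truncated_namefile.txt"  # log file name suggested above

def truncated_name(name, taken):
    """Shorten name to LIMIT chars, appending an increasing number on collision."""
    if len(name) <= LIMIT:
        return name
    root, ext = os.path.splitext(name)
    candidate = root[:LIMIT - len(ext)] + ext
    counter = 1
    while candidate in taken:
        tag = f"~{counter}"
        candidate = root[:LIMIT - len(ext) - len(tag)] + tag + ext
        counter += 1
    return candidate

def rename_long_files(folder):
    for dirpath, _dirs, files in os.walk(folder):
        taken = set(files)
        for name in files:
            new = truncated_name(name, taken)
            if new == name:
                continue
            taken.add(new)
            os.rename(os.path.join(dirpath, name), os.path.join(dirpath, new))
            # keep a record of original vs. truncated names, as proposed
            with open(os.path.join(folder, LOG), "a", encoding="utf-8") as log:
                log.write(f"{os.path.join(dirpath, name)} -> {new}\n")
```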
It is a valid issue. Same problem, server Debian 11.
I have experienced this issue. In 2022 this really shouldn't be a problem. Do we know if this affects particular versions? I'm on 22, which I know I'm going to have to upgrade, but other than this I haven't experienced issues.
I have the same problem; the issue is still valid. Nextcloud could provide an option to automatically truncate filenames bigger than a given number of characters, which would allow one to set a limit of 143 or whatever size...
Stumbled over the same problem and made a quick script to truncate all filenames longer than 180 chars. It's obviously not a durable solution, but it's a cheap bandaid until someone fixes Nextcloud. Use at your own risk.
Hi, please update to 24.0.9 or, better, 25.0.3, and report back whether it fixes the issue. Thank you! My goal is to add a label like e.g. 25-feedback to this ticket once the bug can be reproduced on an up-to-date major Nextcloud version. However, this is not going to work without your help, so thanks for all your effort! If you don't manage to reproduce the issue in time and the issue gets closed, but you can reproduce it afterwards, feel free to create a new bug report with up-to-date information by following this link: https://github.com/nextcloud/server/issues/new?assignees=&labels=bug%2C0.+Needs+triage&template=BUG_REPORT.yml&title=%5BBug%5D%3A+
Seems fixed. I can successfully create a long string of nested directories whose total path far exceeds 250 characters, at least on Linux. Haven't tested it on Windows yet. Syncs fine.
Thanks for verifying!
Many thanks for solving it, it works!
On Windows I'm getting the error with files whose filenames are 255 characters (it works with 200 characters). What's the limit?
This is still a problem, as during transfer a transferId is added. The following error message occurs with the file (I changed the file name to remove sensitive content, but the length and characteristics of all path elements are preserved):

The original filename is valid under Windows with 245 chars. During upload, a suffix is added, which makes the name fail the filename length constraints.
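That also matches the 255-vs-200 observation a few comments up. A quick arithmetic check, assuming the temp name has the shape `.ocTransferId<digits>.part` seen earlier in this thread (the exact suffix may differ between client versions):

```python
NAME_MAX = 255  # per-component filename limit on NTFS and ext4

def fits(filename, transfer_id=1984429488):
    suffix = f".ocTransferId{transfer_id}.part"  # 28 chars for a 10-digit id
    return len(filename) + len(suffix) <= NAME_MAX

print(fits("a" * 245))  # False: 245 + 28 = 273 > 255, so the upload fails
print(fits("a" * 200))  # True:  200 + 28 = 228 <= 255, so it works
print(NAME_MAX - 28)    # 227: the effective limit while this suffix is in play
```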
Can you try setting the relevant config value? Please note, the limit is 250 chars in Nextcloud (and the database imposes the same name length limit on its rows).
I could not replicate the issue with NC Server (30.0.2) and NC Desktop Client (3.14.3). I had since renamed the files in question. Renaming them back does not trigger the error.
Steps to reproduce
Expected behaviour
The file should be correctly uploaded
Actual behaviour
The GUI complains with the following error:
2020-11-03 14:23:31:820 [ warning nextcloud.gui.activity ]: Item “Bibliography/VacuumLeaks/xxxx” retrieved resulted in “File name too long”
Server configuration
Operating system:
Ubuntu 20.04 (also tested with 18.04)
Web server:
nginx version: nginx/1.18.0 (Ubuntu)
Database:
mysqld 10.3.25-MariaDB-0ubuntu0.20.04.1
PHP version:
7.4 (also 7.2 tested)
Nextcloud version: (see Nextcloud admin page)
19.0.4
Updated from an older Nextcloud/ownCloud or fresh install:
Updated
Where did you install Nextcloud from:
Website of Nextcloud
Signing status:
No errors have been found.
List of activated apps:
Enabled:
Disabled:
Nextcloud configuration:
{
"system": {
"instanceid": "REMOVED SENSITIVE VALUE",
"passwordsalt": "REMOVED SENSITIVE VALUE",
"secret": "REMOVED SENSITIVE VALUE",
"trusted_domains": [
"REMOVED SENSITIVE VALUE",
"REMOVED SENSITIVE VALUE"
],
"datadirectory": "REMOVED SENSITIVE VALUE",
"skeletondirectory": "/data/default_data",
"htaccess.RewriteBase": "/",
"dbtype": "mysql",
"version": "19.0.4.2",
"dbname": "REMOVED SENSITIVE VALUE",
"dbhost": "REMOVED SENSITIVE VALUE",
"dbport": "",
"dbtableprefix": "oc_",
"dbuser": "REMOVED SENSITIVE VALUE",
"dbpassword": "REMOVED SENSITIVE VALUE",
"installed": true,
"mail_smtpmode": "smtp",
"mail_smtpauthtype": "LOGIN",
"mail_smtpsecure": "ssl",
"mail_smtpauth": 1,
"mail_from_address": "REMOVED SENSITIVE VALUE",
"mail_domain": "REMOVED SENSITIVE VALUE",
"mail_smtphost": "REMOVED SENSITIVE VALUE",
"mail_smtpport": "465",
"mail_smtpname": "REMOVED SENSITIVE VALUE",
"mail_smtppassword": "REMOVED SENSITIVE VALUE",
"memcache.local": "\OC\Memcache\APCu",
"memcache.locking": "\OC\Memcache\Redis",
"redis": {
"host": "REMOVED SENSITIVE VALUE",
"port": 0,
"dbindex": 0,
"timeout": 1.5
},
"maintenance": false,
"theme": "",
"loglevel": 2,
"updater.release.channel": "stable",
"overwrite.cli.url": "REMOVED SENSITIVE VALUE",
"mysql.utf8mb4": true
}
}
Are you using external storage, if yes which one: local/smb/sftp/...
no
Are you using encryption: yes/no
no
Are you using an external user-backend, if yes which one: LDAP/ActiveDirectory/Webdav/...
no
Client configuration
Browser:
N/A
Operating system:
Win 10
Logs
2020-11-03 14:23:31:820 [ warning nextcloud.gui.activity ]: Item “Bibliography/VacuumLeaks/xxxx” retrieved resulted in “File name too long”