Recent comments posted to this site:

Complete fsck is good, but once a week is probably enough.

But please see if you can make fsck optional depending on whether the machine is running on battery.
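For illustration only (not something git-annex does today), a cron job could skip the weekly fsck while on battery with a check like this; it assumes a Linux machine that exposes battery state under /sys/class/power_supply, and the repo path is a placeholder:

    # skip the weekly fsck while running on battery (Linux-only sketch;
    # the sysfs path and the repo path are assumptions for this example)
    if grep -qs Discharging /sys/class/power_supply/*/status; then
        echo "on battery, skipping fsck"
    else
        cd /path/to/annex && git annex fsck
    fi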

But Rich is right, and I was thinking the same thing earlier this morning: delaying the lsof allows the writer to change the file and exit, and only fsck can detect the problem then. Setting file permissions doesn't help once a process already has the file open for write. That has put me off the delayed lsof idea, unfortunately. lsof could, however, be run safely during the initial annexing.
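For reference, a check along those lines at initial annexing time could look roughly like this (just a sketch; the filename is a placeholder, and lsof exits 0 when it finds at least one process with the file open):

    # refuse to annex a file that some process still has open
    if lsof ./somefile >/dev/null 2>&1; then
        echo "somefile is still open by another process; not annexing it yet"
    else
        git annex add ./somefile
    fi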
Comment by http://joeyh.name/ Fri Jun 15 15:23:21 2012

git-annex was not crashing due to content in the git-annex branch, but due to a symlink in one of your regular git branches, probably master and origin/master.
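If you want to see which symlinks a branch contains, something like this works; 120000 is the mode git records for symlinks:

    # list the symlinks recorded in a branch
    git ls-tree -r master | grep '^120000'
    git ls-tree -r origin/master | grep '^120000'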

This bug is fixed in git master, if you need the fix before the next release.

Comment by http://joeyh.name/ Wed Jun 20 16:59:53 2012
Try running git annex unused --debug; this will tell us the git command that's outputting the data it cannot process. Then you can try running that git command and see what the problem filename is.
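For example, capturing the debug output to a file makes it easy to pick out the failing git command and re-run it by hand (the log filename is arbitrary):

    # save the debug output, then look for the last git command it ran
    git annex unused --debug 2>&1 | tee unused-debug.log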
Comment by http://joeyh.name/ Wed Jun 20 14:30:27 2012
In relation to OSX support: hfsevents (or supporting HFS at all) is probably a bad idea; it's very OSX-specific, and users who are moving USB keys and disks between systems will probably end up using fat32/exfat/vfat disks anyway. Also, if you want, I can lower the turnaround time for the OSX auto-builder that I have set up to every 1 or 2 minutes? Would that help?

Yes, the problem is fixed.

The repository was a normal git repository with path /tmp/çüş (git init) and with annex description "çüş" (git annex init çüş).
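In other words, the setup was roughly this (assuming a UTF-8 locale):

    mkdir /tmp/çüş
    cd /tmp/çüş
    git init
    git annex init çüş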

AFAICT, I can't reproduce the problem anymore either :-)

Doing

sudo sysctl -w kern.maxfilesperproc=400000

somewhat works for me: git-annex watch at least starts up and takes a while to scan the directory, but it's not ideal. Also, creating files seems to work okay, but when I remove a file the change doesn't seem to get pushed across to my other repos; running a sync on the remote repo fixes things.
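Until the assistant propagates the delete itself, the manual workaround mentioned above is just to run a sync in the other repository (the path is a placeholder):

    cd /path/to/other/repo
    git annex sync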

Hey Joey!

I'm not very tech savvy, but here is my question. I think all cloud service providers have an upload limit on how big a single file may be; for example, I can't upload a file bigger than 100 MB on box.net. Does this affect git-annex at all? Will git-annex automatically split the file depending on the cloud provider, or will I have to create small RAR archives of one large file and upload those?

Thanks! James

Ah, reproduced it; you need to use the WORM backend and have the file present in another branch.
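A rough guess at the shape of that repro, not the exact test case (branch and file names are placeholders):

    # add a WORM-backend file on a side branch, then run unused from master
    git checkout -b sidebranch
    git annex add --backend=WORM somefile
    git commit -m 'add somefile'
    git checkout master
    git annex unused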
Comment by http://joeyh.name/ Wed Jun 20 14:49:09 2012
Your locale setting may also be relevant. FWIW, I've tried to create a file with \xb4 in its name and have not gotten git-annex unused to crash on it.
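Something along these lines is what that test looks like (\264 is the octal escape for the byte 0xb4; the filename is arbitrary):

    # create and annex a file whose name contains the raw byte 0xb4,
    # then see whether unused chokes on it
    locale                                 # check the current locale settings
    touch "$(printf 'file\264name')"
    git annex add "$(printf 'file\264name')"
    git commit -m 'add file with 0xb4 in its name'
    git annex unused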
Comment by http://joeyh.name/ Wed Jun 20 14:34:23 2012
Comments on this page are closed.