Thanks, Issues, and Suggestions
First off, thank you; this is a perfect way to handle reading/writing images.
However, it seems that the current boot image does not support writing files larger than 2GB to FAT32, which means a full image cannot be captured. I tried multiple SD cards formatted various ways, and all of them produced an image file of only 2GB. By capturing the dd output to a log file I noticed:
dd: writing '/mnt/BeagleBoneBlack-eMMC-image-9466.img': File too large
206+0 records in
204+1 records out
At first I thought it was an issue with how the SD card was formatted; however, just by booting the BBB into Debian and running dd, I was able to copy 4GB to the same SD card:
dd if=/dev/zero of=/mnt/zero.img bs=10M
410+0 records in
409+0 records out
4294967295 bytes (4.3 GB) copied, 421.259 s, 10.2 MB/s
So it would seem there is a significant flaw in the current build. Perhaps it is related to this issue: https://groups.google.com/forum/#!topic/android-porting/alF-avcQNfI, which mentions that some older Linux kernels had the same problem?
I also suggest:
* Changing the dd line in the save autorun.sh to "dd if=/dev/mmcblk1 of=/mnt/BeagleBoneBlack-eMMC-image-$RANDOM.img bs=10M &> /mnt/autorun.log", or maybe even checking the length of the dd output to ensure a full write.
* Providing a warning that you should check that your img file is 4GB.
* Noting that the SD partition has to be "active" (as SaintGimp already mentioned; I initially hit the same issue).
* Noting that (on at least some revisions) the BeagleBone must be powered by the barrel jack for this to work (I'm on a rev C board).
* Noting that you must let go of the S2 button after a couple of seconds.
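A minimal sketch of the length check suggested above. The helper itself is hypothetical (not from the actual autorun.sh), and the device/sysfs paths in the usage comment are assumptions based on this thread:

```shell
# Hypothetical helper: compare a captured image's size in bytes against
# the expected byte count. Succeeds only on an exact match.
verify_image() {
    expected=$1
    img=$2
    actual=$(stat -c %s "$img") || return 1
    [ "$actual" -eq "$expected" ]
}

# Possible usage in autorun.sh (device and sysfs paths assumed; the
# kernel reports the device size in 512-byte sectors):
#   EXPECTED=$(( $(cat /sys/class/block/mmcblk1/size) * 512 ))
#   verify_image "$EXPECTED" /mnt/BeagleBoneBlack-eMMC-image-1234.img \
#       || echo "WARNING: incomplete image" >> /mnt/autorun.log
```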
Small update... it has the same file size limit using an ext2 partition. Weird.
There are no process-level file size limits either (checked by running "ulimit -a").
My only guess now is that the kernel or dd weren't compiled with LFS (large file support), e.g. _FILE_OFFSET_BITS is not set correctly...
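For anyone who wants to reproduce this quickly, here's a small probe (writing to /tmp just as an example; on the boot image you'd point it at the mounted SD card). Seeking past the 2GiB boundary creates a sparse file, so it's fast and doesn't need 2GB of free space; on a build without large file support the dd should fail with "File too large":

```shell
# Seek 2 GiB into a new file and write one 1 MiB block; the resulting
# (sparse) file ends at 2049 MiB, i.e. past the 2 GiB boundary.
dd if=/dev/zero of=/tmp/lfs-probe.img bs=1M count=1 seek=2048
stat -c %s /tmp/lfs-probe.img   # expect 2148532224 on a working build
rm /tmp/lfs-probe.img
```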
I confirmed this on a rev. C board. The images are limited to 2GB. I agree that it looks like the save-emmc image was not compiled with large file support. There's a way around it, though: I added on-the-fly compression to the commands in autorun.sh:
dd if=/dev/mmcblk1 bs=100M 2>>/mnt/autorun.log | gzip -c > /mnt/BBB-eMMC-$RANDOM.img.gz
gunzip -c /mnt/BBB-stockDebian.revC.img.gz | dd of=/dev/mmcblk1 bs=100M >>/mnt/autorun.log 2>&1
(Thanks for the idea of sending the output to a log file, by the way; this adaptation captures both stdout and stderr.) It takes a lot longer: over 20 minutes to compress and about 15 to decompress, but the resulting stock Debian image is 572 megabytes. That's from a 4GB eMMC that is 42% full. That means that unless you are trying to image an eMMC full of already-compressed movies or images, the result should always fit under the 2GB limit, plus you can fit more images on your SD card.
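For anyone nervous about trusting a compressed image, a checksum round-trip is a cheap sanity check. This sketch uses a scratch file; on the real board the same idea would be comparing `gunzip -c image.img.gz | md5sum` against `dd if=/dev/mmcblk1 bs=100M | md5sum` (device path as used earlier in this thread):

```shell
# Compress a scratch file, then verify the decompressed stream matches
# the original byte-for-byte by comparing md5 checksums.
dd if=/dev/urandom of=/tmp/sample.img bs=1M count=4 2>/dev/null
gzip -c /tmp/sample.img > /tmp/sample.img.gz
ORIG=$(md5sum < /tmp/sample.img | cut -d' ' -f1)
COPY=$(gunzip -c /tmp/sample.img.gz | md5sum | cut -d' ' -f1)
[ "$ORIG" = "$COPY" ] && echo "round-trip OK"
rm /tmp/sample.img /tmp/sample.img.gz
```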
Thanks for the confirmation! I thought I was going crazy...
I like the fix too. I ended up just booting a full stock image and dd'ing the image that way, but this is probably better. I guess I was kind of worried it wouldn't be able to read past the 2GB limit on mmcblk1 either.
Agreed that the 2GB limit isn't on all FAT partitions, only non-FAT32 (e.g. FAT16) partitions. This really isn't an issue with the shared image, as I don't specify a format (which makes it easier to work with non-Linux systems). The workaround of always compressing the images seems practical, so I added it to my image.
Thanks for all your trouble-shooting!
Oh, sorry I just sent you a talk message before I saw your reply here.
Well, the current build can't even write files larger than 2GB to ext partitions. I was also able to write a 4GB file to the same SD card and FAT partition (using the stock build or a PC) that this build could only write a 2GB file to, so it almost certainly isn't a filesystem issue.
Almost all FAT partitions nowadays are FAT32 (as older versions can't even make partitions that are larger than 4GB), so it is very unlikely that is the issue either.
It's been a while since I built Linux, but it seems this is an issue with the kernel build. I would have assumed that large file support was always built into recent kernels, but perhaps somehow it was omitted or broken in this build?
When I try to make my own version of this from the "Build steps" instructions, it runs for about 15 minutes on the "make" part then gives me this error:

buildroot-save-emmc-0.0.1/output/build/linux-ddd36e546e53d3c493075bbebd6188ee843208f9/scripts/gen_initramfs_list.sh: Cannot open '../../images/rootfs.cpio.gz'
Isn't it supposed to be generating the rootfs file? What am I doing wrong?
The reason I have to make my own version of this is that my BeagleBone Green gives me a "mmc1: unrecognised EXT_CSD revision 7" error and doesn't let me write to the eMMC. So I have to patch the kernel. But I don't know where to go from there!
I *can't* be the only person in the world with this problem. I've spent over 18 hours trying to fix this problem. What can I do?