-
@Gareth064: Not sure on the lengths... did you manually inspect the 1% that showed a length delta? What I typically do is calculate a checksum of the source and target files and compare them; they should be identical. This does imply pulling the file back down from the target, so performance-wise that might have an impact, especially if you're dealing with very large files.
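The checksum comparison described here could be sketched like this (Python used purely for illustration; the helper names and byte buffers are hypothetical, not from the original service):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte buffer."""
    return hashlib.sha256(data).hexdigest()

def verify_copy(source_bytes: bytes, target_bytes: bytes) -> bool:
    """Compare digests of the source bytes and the re-downloaded target bytes.

    Any single-bit difference changes the digest, so this catches corruption
    that a length comparison alone would miss (same length, different content).
    """
    return sha256_of(source_bytes) == sha256_of(target_bytes)

# Identical buffers verify; a same-length corrupted buffer does not.
original = b"contract-2024.pdf contents"
corrupted = original[:-1] + b"X"
assert verify_copy(original, original)
assert not verify_copy(original, corrupted)
```

The trade-off mentioned in the reply is visible here: `target_bytes` has to be fetched back from the target before hashing, which doubles the transfer cost for large files.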
-
Hi
I'm looking for some advice here.
I have a service that moves documents from one site to another. In short, to move a file, I simply create a new file in the target location using the bytes of the original file. I then check whether the length of the new file differs from the original; if it does, I throw an error, because a mismatch could mean the new file is corrupted somewhere.
Here is what I am doing (code stripped down for brevity)
That last part of the code.... should I be comparing the Length property of the newly created file against the Length of the byteArray from the original file?
99% of the time they match and everything is merry, but the other 1% of the time they don't match and the new file's length value is smaller, so the error is thrown.
Problem is, we deal with possibly 1k+ documents per day through this service, so the 1% is still a hearty number of failures.
As I have been writing this, and got this far, I am now thinking I should change the length check to the following
Advice on this would be great. Even if that advice is "You don't need to worry about checking, it never corrupts :D"
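One way the overall flow could be structured to verify content rather than just length is a copy-then-verify loop with a couple of retries (a rough sketch; `upload` and `download` are hypothetical stand-ins for the actual calls that write to and read back from the target site, and the retry count is arbitrary):

```python
import hashlib

def copy_with_verification(source_bytes, upload, download, max_attempts=3):
    """Upload the source bytes, re-download them, and compare SHA-256 digests.

    Retries a few times before giving up, since a transient fault can
    truncate an upload without raising an error on the client side.
    Returns True on a verified copy, raises IOError after exhausting retries.
    """
    expected = hashlib.sha256(source_bytes).hexdigest()
    for attempt in range(1, max_attempts + 1):
        upload(source_bytes)
        copied = download()
        if hashlib.sha256(copied).hexdigest() == expected:
            return True
        # Mismatch: the target copy differs from the source; try again.
    raise IOError(f"copy verification failed after {max_attempts} attempts")

# Usage with an in-memory stand-in for the target store:
store = {}
copy_with_verification(
    b"document bytes",
    upload=lambda b: store.update(f=b),
    download=lambda: store["f"],
)
```

Comparing the new file's length against `len(source_bytes)` (rather than against the source file's reported length) is cheaper and catches truncation, but only a content comparison like the digest check catches same-length corruption.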