I hope this message finds you well. I’m a frequent user of DeepSeek and have found it incredibly helpful for my projects. However, I’ve encountered a limitation when working with large files, as the current system doesn’t support chunked transfers.
I’d like to suggest implementing a chunked transfer feature that would allow users to upload and process large files more efficiently. Here’s how it could work:
Split Large Files: The system would automatically split large files into smaller chunks.
Send Chunks Sequentially: Each chunk would be sent with metadata indicating:
- The start of a chunked transfer.
- The end of a chunked transfer.
- The total number of chunks.
- The current chunk number.
This metadata would allow the tool to recognize that a chunked transfer is in progress and wait until all chunks have been received before processing the complete content.
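To make the proposal concrete, here is a minimal sketch of the sender side. The function name, field names, and chunk size are illustrative assumptions, not an existing DeepSeek API:

```python
CHUNK_SIZE = 64 * 1024  # 64 KiB per chunk; an arbitrary size chosen for illustration


def make_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split raw bytes into chunk messages carrying the metadata described above."""
    total = (len(data) + chunk_size - 1) // chunk_size or 1
    for i in range(total):
        yield {
            "is_start": i == 0,           # marks the start of the chunked transfer
            "is_end": i == total - 1,     # marks the end of the chunked transfer
            "total_chunks": total,        # total number of chunks
            "chunk_number": i + 1,        # current chunk number (1-based)
            "payload": data[i * chunk_size:(i + 1) * chunk_size],
        }
```

Each yielded message is self-describing, so the receiver can detect a transfer in progress from the flags alone.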
Reassemble on the Backend: DeepSeek would reassemble the chunks into the complete file and process it as usual.
This feature would greatly enhance the user experience, especially for those working with large documents or datasets. I’ve already prototyped a local version of this feature in a collaboration tool, and it works seamlessly.
Thank you for considering this suggestion. I’d be happy to provide more details or feedback if needed.
Best regards,