Stars
FoundationDB - the open source, distributed, transactional key-value store
Qwen2.5 is the large language model series developed by the Qwen team at Alibaba Cloud.
Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation
Finetune Llama 3.3, DeepSeek-R1 & Reasoning LLMs 2x faster with 70% less memory! 🦥
Fully open data curation for reasoning models
SGLang is a fast serving framework for large language models and vision language models.
A high-throughput and memory-efficient inference and serving engine for LLMs
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
A high-quality tool for converting PDF to Markdown and JSON; a one-stop, open-source data extraction tool.
[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o-level performance.
Train transformer language models with reinforcement learning.
Llama Chinese community: the online Llama3 demo and fine-tuned models are now available, with the latest Llama3 learning resources compiled in real time; all code has been updated for Llama3. Building the best Chinese Llama large model, fully open source and commercially usable.
Chinese and English sensitive words, language detection, Chinese and international mobile/landline number location and carrier lookup, gender inference from names, phone number extraction, ID card number extraction, email extraction, Chinese and Japanese personal name databases, Chinese abbreviation database, character-decomposition dictionary, word sentiment scores, stop words, reactionary word list, violence and terrorism word list, traditional/simplified Chinese conversion, English that mimics Chinese pronunciation, Wang Feng lyrics generator, occupation name lexicon, synonym lexicon, antonym lexicon, negation word lexicon, car brand lexicon, car parts lexicon, continuous English segmentation, various Chinese word vectors, company name collection, classical poetry database, IT lexicon, finance lexicon, idiom lexicon, place name lexicon, …
Official release of InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3).
A library for efficient similarity search and clustering of dense vectors.
LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.
A library that provides an embeddable, persistent key-value store for fast storage.
InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing…
Production-Grade Container Scheduling and Management
Essential Cheat Sheets for deep learning and machine learning researchers https://medium.com/@kailashahirwar/essential-cheat-sheets-for-machine-learning-and-deep-learning-researchers-efb6a8ebd2e5
deepspeedai / Megatron-DeepSpeed
Forked from NVIDIA/Megatron-LM. Ongoing research training transformer language models at scale, including: BERT & GPT-2
Fast and memory-efficient exact attention
Transformer-related optimization, including BERT, GPT
Simple, safe way to store and distribute tensors