update setup.py
ToluClassics committed Apr 19, 2024
1 parent 3679723 commit 17c3b33
Showing 6 changed files with 134 additions and 1 deletion.
5 changes: 4 additions & 1 deletion setup.py
@@ -6,14 +6,17 @@
 with open("requirements.txt") as f:
     requirements = f.read().splitlines()

+with open("README.md", "r") as fh:
+    long_description = fh.read()
+

 setuptools.setup(
     name="mlx-transformers",
     version="0.0.1",
     author="Ogundepo Odunayo",
     author_email="[email protected]",
     description=description,
-    long_description=description,
+    long_description=long_description,
     long_description_content_type="text/markdown",
     url="https://github.com/ToluClassics/mlx-transformers",
     install_requires=requirements,
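The setup.py above feeds `f.read().splitlines()` straight into `install_requires`, which also passes through any blank or comment lines. A slightly more defensive variant (an illustrative sketch, not part of this commit) filters those out:

```python
import os
import tempfile

# Illustrative sketch (not part of this commit): parse a requirements
# file while skipping blank lines and comments, which a bare
# f.read().splitlines() would pass straight into install_requires.
def parse_requirements(path):
    with open(path) as f:
        return [
            line.strip()
            for line in f.read().splitlines()
            if line.strip() and not line.strip().startswith("#")
        ]

# Small self-check against a temporary file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("mlx==0.8.1\n\n# comment\nnumpy==1.26.4\n")

print(parse_requirements(tmp.name))  # ['mlx==0.8.1', 'numpy==1.26.4']
os.remove(tmp.name)
```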
95 changes: 95 additions & 0 deletions src/mlx_transformers.egg-info/PKG-INFO
@@ -0,0 +1,95 @@
Metadata-Version: 2.1
Name: mlx-transformers
Version: 0.0.1
Summary: MLX Transformers is a machine learning framework with an interface similar to HuggingFace Transformers, using MLX core as the backend.
Home-page: https://github.com/ToluClassics/mlx-transformers
Author: Ogundepo Odunayo
Author-email: [email protected]
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: huggingface-hub==0.22.2
Requires-Dist: mlx==0.8.1
Requires-Dist: numpy==1.26.4
Requires-Dist: safetensors==0.4.3
Requires-Dist: sentencepiece==0.2.0
Requires-Dist: tokenizers==0.19.1
Requires-Dist: torch==2.2.2
Requires-Dist: tqdm==4.66.2
Requires-Dist: transformers==4.40.0

# MLX Transformers

[![PyPI](https://img.shields.io/pypi/v/mlx-transformers?color=red)](https://pypi.org/project/mlx-transformers/)


MLX Transformers is a library that provides model implementations in MLX. It uses a model interface similar to HuggingFace Transformers and provides a way to load and run models on Apple Silicon devices. Implemented models mirror the module structure of their HuggingFace counterparts.

MLX Transformers is currently available for inference only on Apple Silicon devices. Training support will be added in the future.

# Installation

This library is available on PyPI and can be installed using pip:

```bash
pip install mlx-transformers
```


## Quick Tour

A list of the available models can be found in the `mlx_transformers.models` module and are also listed in the [section below](#available-model-architectures). The following example demonstrates how to load a model and use it for inference:

- First, download and convert the model checkpoint to MLX format. To do this from a HuggingFace checkpoint:

```python
from transformers import BertModel

from mlx_transformers.models.utils import convert

model_name_or_path = "bert-base-uncased"
mlx_checkpoint = "bert-base-uncased.npz"

convert(model_name_or_path, mlx_checkpoint, BertModel)
```
This will download the model checkpoint from HuggingFace and convert it to MLX format.
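The `.npz` target above is NumPy's zipped-array archive, in which each weight is stored under its layer name. A rough sketch of the round trip (the layer names and shapes here are made up for illustration; this is not the library's actual conversion code):

```python
import os

import numpy as np

# Hypothetical layer names standing in for a real BERT state dict;
# shapes are shrunk for illustration.
weights = {
    "embeddings.word_embeddings.weight": np.zeros((8, 4), dtype=np.float32),
    "encoder.layer.0.query.weight": np.zeros((4, 4), dtype=np.float32),
}

# A conversion step would end by writing an archive like this ...
np.savez("bert-demo.npz", **weights)

# ... and loading it back recovers each array by name.
restored = np.load("bert-demo.npz")
assert set(restored.files) == set(weights)
assert restored["embeddings.word_embeddings.weight"].shape == (8, 4)

restored.close()
os.remove("bert-demo.npz")
```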

- Now you can load the model using MLX Transformers in a few lines of code:

```python
from transformers import BertConfig, BertTokenizer

from mlx_transformers.models import BertModel as MLXBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
config = BertConfig.from_pretrained("bert-base-uncased")

# Note: this initializes the model from its config only; the converted
# checkpoint ("bert-base-uncased.npz" above) is not loaded in this snippet.
model = MLXBertModel(config)

sample_input = "Hello, world!"
inputs = tokenizer(sample_input, return_tensors="np")
outputs = model(**inputs)
```


## Available Models

The following models have been ported to MLX Transformers from Huggingface for inference:

1. Bert
2. Roberta
3. XLMRoberta
4. M2M100

## Examples

Coming soon...

## Benchmarks

Coming soon...

## Contributions

Contributions to MLX Transformers are welcome. See the contributing documentation for instructions on setting up a development environment.
24 changes: 24 additions & 0 deletions src/mlx_transformers.egg-info/SOURCES.txt
@@ -0,0 +1,24 @@
CONTRIBUTING.md
LICENSE
MANIFEST.in
README.md
pyproject.toml
requirements.txt
setup.py
src/mlx_transformers/__init__.py
src/mlx_transformers.egg-info/PKG-INFO
src/mlx_transformers.egg-info/SOURCES.txt
src/mlx_transformers.egg-info/dependency_links.txt
src/mlx_transformers.egg-info/requires.txt
src/mlx_transformers.egg-info/top_level.txt
src/mlx_transformers/models/__init__.py
src/mlx_transformers/models/bert.py
src/mlx_transformers/models/clip.py
src/mlx_transformers/models/m2m_100.py
src/mlx_transformers/models/modelling_outputs.py
src/mlx_transformers/models/roberta.py
src/mlx_transformers/models/utils.py
src/mlx_transformers/models/xlm_roberta.py
tests/test_bert.py
tests/test_roberta.py
tests/test_xlm_roberta.py
1 change: 1 addition & 0 deletions src/mlx_transformers.egg-info/dependency_links.txt
@@ -0,0 +1 @@

9 changes: 9 additions & 0 deletions src/mlx_transformers.egg-info/requires.txt
@@ -0,0 +1,9 @@
huggingface-hub==0.22.2
mlx==0.8.1
numpy==1.26.4
safetensors==0.4.3
sentencepiece==0.2.0
tokenizers==0.19.1
torch==2.2.2
tqdm==4.66.2
transformers==4.40.0
1 change: 1 addition & 0 deletions src/mlx_transformers.egg-info/top_level.txt
@@ -0,0 +1 @@
mlx_transformers
