diff --git a/.github/workflows/R-CMD-check.yaml b/.github/workflows/R-CMD-check.yaml index a82b95f..4ad166e 100644 --- a/.github/workflows/R-CMD-check.yaml +++ b/.github/workflows/R-CMD-check.yaml @@ -2,9 +2,9 @@ # Need help debugging build failures? Start at https://github.com/r-lib/actions#where-to-find-help on: push: - branches: temp + branches: main pull_request: - branches: temp + branches: main name: R package checks diff --git a/.github/workflows/quarto-site.yaml b/.github/workflows/quarto-site.yaml index 5202cc5..81ba2ac 100644 --- a/.github/workflows/quarto-site.yaml +++ b/.github/workflows/quarto-site.yaml @@ -1,7 +1,7 @@ on: push: branches: - - temp + - main name: Render and Publish diff --git a/README.md b/README.md index 6fd398a..a97e35e 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,14 @@ # mall + + +[![R-CMD-check](https://github.com/mlverse/mall/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/mlverse/mall/actions/workflows/R-CMD-check.yaml) +[![Codecov test +coverage](https://codecov.io/gh/mlverse/mall/branch/main/graph/badge.svg)](https://app.codecov.io/gh/mlverse/mall?branch=main) +[![Lifecycle: +experimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://lifecycle.r-lib.org/articles/stages.html#experimental) + + diff --git a/_freeze/index/execute-results/html.json b/_freeze/index/execute-results/html.json index 5baae1d..31569e9 100644 --- a/_freeze/index/execute-results/html.json +++ b/_freeze/index/execute-results/html.json @@ -1,8 +1,8 @@ { - "hash": "65e02ca9d0d547c2bfe30dac0991b3ea", + "hash": "9dee4e73aa0ad5aaff40e3dfda26d5c9", "result": { "engine": "knitr", - "markdown": "---\nformat:\n html:\n toc: true\nexecute:\n eval: true\n freeze: true\n---\n\n\n\n\n\n\n\n\nRun multiple LLM predictions against a data frame. The predictions are processed \nrow-wise over a specified column. It works using a pre-determined one-shot prompt,\nalong with the current row's content. `mall` has been implemented for both R\nand Python. The prompt that is use will depend of the type of analysis needed. \n\nCurrently, the included prompts perform the following: \n\n- [Sentiment analysis](#sentiment)\n- [Text summarizing](#summarize)\n- [Classify text](#classify)\n- [Extract one, or several](#extract), specific pieces information from the text\n- [Translate text](#translate)\n- [Custom prompt](#custom-prompt)\n\nThis package is inspired by the SQL AI functions now offered by vendors such as\n[Databricks](https://docs.databricks.com/en/large-language-models/ai-functions.html) \nand Snowflake. `mall` uses [Ollama](https://ollama.com/) to interact with LLMs \ninstalled locally. \n\n\n\nFor **R**, that interaction takes place via the \n[`ollamar`](https://hauselin.github.io/ollama-r/) package. The functions are \ndesigned to easily work with piped commands, such as `dplyr`. \n\n```r\nreviews |>\n llm_sentiment(review)\n```\n\n\n\nFor **Python**, `mall` is a library extension to [Polars](https://pola.rs/). To\ninteract with Ollama, it uses the official\n[Python library](https://github.com/ollama/ollama-python).\n\n```python\nreviews.llm.sentiment(\"review\")\n```\n\n## Motivation\n\nWe want to new find ways to help data scientists use LLMs in their daily work. \nUnlike the familiar interfaces, such as chatting and code completion, this interface\nruns your text data directly against the LLM. \n\nThe LLM's flexibility, allows for it to adapt to the subject of your data, and \nprovide surprisingly accurate predictions. 
This saves the data scientist the\nneed to write and tune an NLP model. \n\nIn recent times, the capabilities of LLMs that can run locally in your computer \nhave increased dramatically. This means that these sort of analysis can run\nin your machine with good accuracy. Additionally, it makes it possible to take\nadvantage of LLM's at your institution, since the data will not leave the\ncorporate network. \n\n## Get started\n\n- Install `mall` from Github\n\n \n::: {.panel-tabset group=\"language\"}\n## R\n```r\npak::pak(\"mlverse/mall/r\")\n```\n\n## Python\n```python\npip install \"mall @ git+https://git@github.com/mlverse/mall.git#subdirectory=python\"\n```\n:::\n\n- [Download Ollama from the official website](https://ollama.com/download)\n\n- Install and start Ollama in your computer\n\n\n::: {.panel-tabset group=\"language\"}\n## R\n- Install Ollama in your machine. The `ollamar` package's website provides this\n[Installation guide](https://hauselin.github.io/ollama-r/#installation)\n\n- Download an LLM model. For example, I have been developing this package using\nLlama 3.2 to test. To get that model you can run: \n ```r\n ollamar::pull(\"llama3.2\")\n ```\n \n## Python\n\n- Install the official Ollama library\n ```python\n pip install ollama\n ```\n\n- Download an LLM model. For example, I have been developing this package using\nLlama 3.2 to test. To get that model you can run: \n ```python\n import ollama\n ollama.pull('llama3.2')\n ```\n:::\n \n#### With Databricks (R only)\n\nIf you pass a table connected to **Databricks** via `odbc`, `mall` will \nautomatically use Databricks' LLM instead of Ollama. *You won't need Ollama \ninstalled if you are using Databricks only.*\n\n`mall` will call the appropriate SQL AI function. For more information see our \n[Databricks article.](https://mlverse.github.io/mall/articles/databricks.html) \n\n## LLM functions\n\nWe will start with loading a very small data set contained in `mall`. It has\n3 product reviews that we will use as the source of our examples.\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nlibrary(mall)\ndata(\"reviews\")\n\nreviews\n#> # A tibble: 3 × 1\n#> review \n#> \n#> 1 This has been the best TV I've ever used. Great screen, and sound. \n#> 2 I regret buying this laptop. It is too slow and the keyboard is too noisy \n#> 3 Not sure how to feel about my new washing machine. Great color, but hard to f…\n```\n:::\n\n\n\n## Python\n\n\n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nimport mall \ndata = mall.MallData\nreviews = data.reviews\n\nreviews \n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n
<table>
<thead><tr><th>review</th></tr></thead>
<tbody>
<tr><td>"This has been the best TV I've ever used. Great screen, and sound."</td></tr>
<tr><td>"I regret buying this laptop. It is too slow and the keyboard is too noisy"</td></tr>
<tr><td>"Not sure how to feel about my new washing machine. Great color, but hard to figure"</td></tr>
</tbody>
</table>
\n```\n\n:::\n:::\n\n\n:::\n\n\n\n\n\n\n\n### Sentiment\n\nAutomatically returns \"positive\", \"negative\", or \"neutral\" based on the text.\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nreviews |>\n llm_sentiment(review)\n#> # A tibble: 3 × 2\n#> review .sentiment\n#> \n#> 1 This has been the best TV I've ever used. Great screen, and sound. positive \n#> 2 I regret buying this laptop. It is too slow and the keyboard is to… negative \n#> 3 Not sure how to feel about my new washing machine. Great color, bu… neutral\n```\n:::\n\n\n\nFor more information and examples visit this function's \n[R reference page](reference/llm_sentiment.qmd) \n\n## Python \n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.sentiment(\"review\")\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n
<table>
<thead><tr><th>review</th><th>sentiment</th></tr></thead>
<tbody>
<tr><td>"This has been the best TV I've ever used. Great screen, and sound."</td><td>"positive"</td></tr>
<tr><td>"I regret buying this laptop. It is too slow and the keyboard is too noisy"</td><td>"negative"</td></tr>
<tr><td>"Not sure how to feel about my new washing machine. Great color, but hard to figure"</td><td>"neutral"</td></tr>
</tbody>
</table>
\n```\n\n:::\n:::\n\n\n\nFor more information and examples visit this function's \n[Python reference page](reference/MallFrame.qmd#mall.MallFrame.sentiment) \n\n:::\n\n### Summarize\n\nThere may be a need to reduce the number of words in a given text. Typically to \nmake it easier to understand its intent. The function has an argument to \ncontrol the maximum number of words to output \n(`max_words`):\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nreviews |>\n llm_summarize(review, max_words = 5)\n#> # A tibble: 3 × 2\n#> review .summary \n#> \n#> 1 This has been the best TV I've ever used. Gr… it's a great tv \n#> 2 I regret buying this laptop. It is too slow … laptop purchase was a mistake \n#> 3 Not sure how to feel about my new washing ma… having mixed feelings about it\n```\n:::\n\n\n\nFor more information and examples visit this function's \n[R reference page](reference/llm_summarize.qmd) \n\n## Python \n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.summarize(\"review\", 5)\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n
<table>
<thead><tr><th>review</th><th>summary</th></tr></thead>
<tbody>
<tr><td>"This has been the best TV I've ever used. Great screen, and sound."</td><td>"great tv with good features"</td></tr>
<tr><td>"I regret buying this laptop. It is too slow and the keyboard is too noisy"</td><td>"laptop purchase was a mistake"</td></tr>
<tr><td>"Not sure how to feel about my new washing machine. Great color, but hard to figure"</td><td>"feeling uncertain about new purchase"</td></tr>
</tbody>
</table>
\n```\n\n:::\n:::\n\n\n\nFor more information and examples visit this function's \n[Python reference page](reference/MallFrame.qmd#mall.MallFrame.summarize) \n\n:::\n\n### Classify\n\nUse the LLM to categorize the text into one of the options you provide: \n\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nreviews |>\n llm_classify(review, c(\"appliance\", \"computer\"))\n#> # A tibble: 3 × 2\n#> review .classify\n#> \n#> 1 This has been the best TV I've ever used. Gr… computer \n#> 2 I regret buying this laptop. It is too slow … computer \n#> 3 Not sure how to feel about my new washing ma… appliance\n```\n:::\n\n\n\nFor more information and examples visit this function's \n[R reference page](reference/llm_classify.qmd) \n\n## Python \n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.classify(\"review\", [\"computer\", \"appliance\"])\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n
<table>
<thead><tr><th>review</th><th>classify</th></tr></thead>
<tbody>
<tr><td>"This has been the best TV I've ever used. Great screen, and sound."</td><td>"appliance"</td></tr>
<tr><td>"I regret buying this laptop. It is too slow and the keyboard is too noisy"</td><td>"computer"</td></tr>
<tr><td>"Not sure how to feel about my new washing machine. Great color, but hard to figure"</td><td>"appliance"</td></tr>
</tbody>
</table>
\n```\n\n:::\n:::\n\n\n\nFor more information and examples visit this function's \n[Python reference page](reference/MallFrame.qmd#mall.MallFrame.classify) \n\n:::\n\n### Extract \n\nOne of the most interesting use cases Using natural language, we can tell the \nLLM to return a specific part of the text. In the following example, we request\nthat the LLM return the product being referred to. We do this by simply saying \n\"product\". The LLM understands what we *mean* by that word, and looks for that\nin the text.\n\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nreviews |>\n llm_extract(review, \"product\")\n#> # A tibble: 3 × 2\n#> review .extract \n#> \n#> 1 This has been the best TV I've ever used. Gr… tv \n#> 2 I regret buying this laptop. It is too slow … laptop \n#> 3 Not sure how to feel about my new washing ma… washing machine\n```\n:::\n\n\n\nFor more information and examples visit this function's \n[R reference page](reference/llm_extract.qmd) \n\n## Python \n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.extract(\"review\", \"product\")\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n
<table>
<thead><tr><th>review</th><th>extract</th></tr></thead>
<tbody>
<tr><td>"This has been the best TV I've ever used. Great screen, and sound."</td><td>"tv"</td></tr>
<tr><td>"I regret buying this laptop. It is too slow and the keyboard is too noisy"</td><td>"laptop"</td></tr>
<tr><td>"Not sure how to feel about my new washing machine. Great color, but hard to figure"</td><td>"washing machine"</td></tr>
</tbody>
</table>
\n```\n\n:::\n:::\n\n\n\nFor more information and examples visit this function's \n[Python reference page](reference/MallFrame.qmd#mall.MallFrame.extract) \n\n:::\n\n\n### Translate\n\nAs the title implies, this function will translate the text into a specified \nlanguage. What is really nice, it is that you don't need to specify the language\nof the source text. Only the target language needs to be defined. The translation\naccuracy will depend on the LLM\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nreviews |>\n llm_translate(review, \"spanish\")\n#> # A tibble: 3 × 2\n#> review .translation \n#> \n#> 1 This has been the best TV I've ever used. Gr… Esta ha sido la mejor televisió…\n#> 2 I regret buying this laptop. It is too slow … Me arrepiento de comprar este p…\n#> 3 Not sure how to feel about my new washing ma… No estoy seguro de cómo me sien…\n```\n:::\n\n\n\nFor more information and examples visit this function's \n[R reference page](reference/llm_translate.qmd) \n\n## Python \n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.translate(\"review\", \"spanish\")\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n
<table>
<thead><tr><th>review</th><th>translation</th></tr></thead>
<tbody>
<tr><td>"This has been the best TV I've ever used. Great screen, and sound."</td><td>"Esta ha sido la mejor televisión que he utilizado hasta ahora. Gran pantalla y sonido."</td></tr>
<tr><td>"I regret buying this laptop. It is too slow and the keyboard is too noisy"</td><td>"Me arrepiento de comprar este portátil. Es demasiado lento y la tecla es demasiado ruidosa."</td></tr>
<tr><td>"Not sure how to feel about my new washing machine. Great color, but hard to figure"</td><td>"No estoy seguro de cómo sentirme con mi nueva lavadora. Un color maravilloso, pero muy difícil de en…</td></tr>
</tbody>
</table>
\n```\n\n:::\n:::\n\n\n\nFor more information and examples visit this function's \n[Python reference page](reference/MallFrame.qmd#mall.MallFrame.translate) \n\n:::\n\n### Custom prompt\n\nIt is possible to pass your own prompt to the LLM, and have `mall` run it \nagainst each text entry:\n\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nmy_prompt <- paste(\n \"Answer a question.\",\n \"Return only the answer, no explanation\",\n \"Acceptable answers are 'yes', 'no'\",\n \"Answer this about the following text, is this a happy customer?:\"\n)\n\nreviews |>\n llm_custom(review, my_prompt)\n#> # A tibble: 3 × 2\n#> review .pred\n#> \n#> 1 This has been the best TV I've ever used. Great screen, and sound. Yes \n#> 2 I regret buying this laptop. It is too slow and the keyboard is too noi… No \n#> 3 Not sure how to feel about my new washing machine. Great color, but har… No\n```\n:::\n\n\n\nFor more information and examples visit this function's \n[R reference page](reference/llm_custom.qmd) \n\n## Python \n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nmy_prompt = (\n \"Answer a question.\"\n \"Return only the answer, no explanation\"\n \"Acceptable answers are 'yes', 'no'\"\n \"Answer this about the following text, is this a happy customer?:\"\n)\n\nreviews.llm.custom(\"review\", prompt = my_prompt)\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n
<table>
<thead><tr><th>review</th><th>custom</th></tr></thead>
<tbody>
<tr><td>"This has been the best TV I've ever used. Great screen, and sound."</td><td>"Yes"</td></tr>
<tr><td>"I regret buying this laptop. It is too slow and the keyboard is too noisy"</td><td>"No"</td></tr>
<tr><td>"Not sure how to feel about my new washing machine. Great color, but hard to figure"</td><td>"No"</td></tr>
</tbody>
</table>
\n```\n\n:::\n:::\n\n\n\nFor more information and examples visit this function's \n[Python reference page](reference/MallFrame.qmd#mall.MallFrame.custom) \n\n:::\n\n## Model selection and settings\n\nYou can set the model and its options to use when calling the LLM. In this case,\nwe refer to options as model specific things that can be set, such as seed or\ntemperature. \n\n::: {.panel-tabset group=\"language\"}\n## R\n\nInvoking an `llm` function will automatically initialize a model selection\nif you don't have one selected yet. If there is only one option, it will \npre-select it for you. If there are more than one available models, then `mall`\nwill present you as menu selection so you can select which model you wish to \nuse.\n\nCalling `llm_use()` directly will let you specify the model and backend to use.\nYou can also setup additional arguments that will be passed down to the \nfunction that actually runs the prediction. In the case of Ollama, that function\nis [`chat()`](https://hauselin.github.io/ollama-r/reference/chat.html). \n\nThe model to use, and other options can be set for the current R session\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nllm_use(\"ollama\", \"llama3.2\", seed = 100, temperature = 0)\n```\n:::\n\n\n\n\n## Python \n\nThe model and options to be used will be defined at the Polars data frame \nobject level. If not passed, the default model will be **llama3.2**.\n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.use(\"ollama\", \"llama3.2\", options = dict(seed = 100))\n```\n:::\n\n\n\n:::\n\n#### Results caching \n\nBy default `mall` caches the requests and corresponding results from a given\nLLM run. Each response is saved as individual JSON files. By default, the folder\nname is `_mall_cache`. The folder name can be customized, if needed. Also, the\ncaching can be turned off by setting the argument to empty (`\"\"`).\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nllm_use(.cache = \"_my_cache\")\n```\n:::\n\n\n\nTo turn off:\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nllm_use(.cache = \"\")\n```\n:::\n\n\n\n## Python \n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.use(_cache = \"my_cache\")\n```\n:::\n\n\n\nTo turn off:\n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.use(_cache = \"\")\n```\n:::\n\n\n\n:::\n\nFor more information see the [Caching Results](articles/caching.qmd) article. \n\n## Key considerations\n\nThe main consideration is **cost**. Either, time cost, or money cost.\n\nIf using this method with an LLM locally available, the cost will be a long \nrunning time. Unless using a very specialized LLM, a given LLM is a general model. \nIt was fitted using a vast amount of data. So determining a response for each \nrow, takes longer than if using a manually created NLP model. The default model\nused in Ollama is [Llama 3.2](https://ollama.com/library/llama3.2), \nwhich was fitted using 3B parameters. \n\nIf using an external LLM service, the consideration will need to be for the \nbilling costs of using such service. Keep in mind that you will be sending a lot\nof data to be evaluated. \n\nAnother consideration is the novelty of this approach. Early tests are \nproviding encouraging results. 
But you, as an user, will still need to keep\nin mind that the predictions will not be infallible, so always check the output.\nAt this time, I think the best use for this method, is for a quick analysis.\n\n\n## Vector functions (R only)\n\n`mall` includes functions that expect a vector, instead of a table, to run the\npredictions. This should make it easier to test things, such as custom prompts\nor results of specific text. Each `llm_` function has a corresponding `llm_vec_`\nfunction:\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nllm_vec_sentiment(\"I am happy\")\n#> [1] \"positive\"\n```\n:::\n\n::: {.cell}\n\n```{.r .cell-code}\nllm_vec_translate(\"Este es el mejor dia!\", \"english\")\n#> [1] \"It's the best day!\"\n```\n:::\n", + "markdown": "---\nformat:\n html:\n toc: true\nexecute:\n eval: true\n freeze: true\n---\n\n\n\n\n\n\n\n\n\n\n[![R package check](https://github.com/mlverse/mall/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/mlverse/mall/actions/workflows/R-CMD-check.yaml)\n[![R package coverage](https://codecov.io/gh/mlverse/mall/branch/main/graph/badge.svg)](https://app.codecov.io/gh/mlverse/mall?branch=main)\n[![Lifecycle:\nexperimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://lifecycle.r-lib.org/articles/stages.html#experimental)\n\n\n\nRun multiple LLM predictions against a data frame. The predictions are processed \nrow-wise over a specified column. It works using a pre-determined one-shot prompt,\nalong with the current row's content. `mall` has been implemented for both R\nand Python. The prompt that is use will depend of the type of analysis needed. \n\nCurrently, the included prompts perform the following: \n\n- [Sentiment analysis](#sentiment)\n- [Text summarizing](#summarize)\n- [Classify text](#classify)\n- [Extract one, or several](#extract), specific pieces information from the text\n- [Translate text](#translate)\n- [Custom prompt](#custom-prompt)\n\nThis package is inspired by the SQL AI functions now offered by vendors such as\n[Databricks](https://docs.databricks.com/en/large-language-models/ai-functions.html) \nand Snowflake. `mall` uses [Ollama](https://ollama.com/) to interact with LLMs \ninstalled locally. \n\n\n\nFor **R**, that interaction takes place via the \n[`ollamar`](https://hauselin.github.io/ollama-r/) package. The functions are \ndesigned to easily work with piped commands, such as `dplyr`. \n\n```r\nreviews |>\n llm_sentiment(review)\n```\n\n\n\nFor **Python**, `mall` is a library extension to [Polars](https://pola.rs/). To\ninteract with Ollama, it uses the official\n[Python library](https://github.com/ollama/ollama-python).\n\n```python\nreviews.llm.sentiment(\"review\")\n```\n\n## Motivation\n\nWe want to new find ways to help data scientists use LLMs in their daily work. \nUnlike the familiar interfaces, such as chatting and code completion, this interface\nruns your text data directly against the LLM. \n\nThe LLM's flexibility, allows for it to adapt to the subject of your data, and \nprovide surprisingly accurate predictions. This saves the data scientist the\nneed to write and tune an NLP model. \n\nIn recent times, the capabilities of LLMs that can run locally in your computer \nhave increased dramatically. This means that these sort of analysis can run\nin your machine with good accuracy. Additionally, it makes it possible to take\nadvantage of LLM's at your institution, since the data will not leave the\ncorporate network. 
\n\n## Get started\n\n- Install `mall` from Github\n\n \n::: {.panel-tabset group=\"language\"}\n## R\n```r\npak::pak(\"mlverse/mall/r\")\n```\n\n## Python\n```python\npip install \"mall @ git+https://git@github.com/mlverse/mall.git#subdirectory=python\"\n```\n:::\n\n- [Download Ollama from the official website](https://ollama.com/download)\n\n- Install and start Ollama in your computer\n\n\n::: {.panel-tabset group=\"language\"}\n## R\n- Install Ollama in your machine. The `ollamar` package's website provides this\n[Installation guide](https://hauselin.github.io/ollama-r/#installation)\n\n- Download an LLM model. For example, I have been developing this package using\nLlama 3.2 to test. To get that model you can run: \n ```r\n ollamar::pull(\"llama3.2\")\n ```\n \n## Python\n\n- Install the official Ollama library\n ```python\n pip install ollama\n ```\n\n- Download an LLM model. For example, I have been developing this package using\nLlama 3.2 to test. To get that model you can run: \n ```python\n import ollama\n ollama.pull('llama3.2')\n ```\n:::\n \n#### With Databricks (R only)\n\nIf you pass a table connected to **Databricks** via `odbc`, `mall` will \nautomatically use Databricks' LLM instead of Ollama. *You won't need Ollama \ninstalled if you are using Databricks only.*\n\n`mall` will call the appropriate SQL AI function. For more information see our \n[Databricks article.](https://mlverse.github.io/mall/articles/databricks.html) \n\n## LLM functions\n\nWe will start with loading a very small data set contained in `mall`. It has\n3 product reviews that we will use as the source of our examples.\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nlibrary(mall)\ndata(\"reviews\")\n\nreviews\n#> # A tibble: 3 × 1\n#> review \n#> \n#> 1 This has been the best TV I've ever used. Great screen, and sound. \n#> 2 I regret buying this laptop. It is too slow and the keyboard is too noisy \n#> 3 Not sure how to feel about my new washing machine. Great color, but hard to f…\n```\n:::\n\n\n\n## Python\n\n\n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nimport mall \ndata = mall.MallData\nreviews = data.reviews\n\nreviews \n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n
<table>
<thead><tr><th>review</th></tr></thead>
<tbody>
<tr><td>"This has been the best TV I've ever used. Great screen, and sound."</td></tr>
<tr><td>"I regret buying this laptop. It is too slow and the keyboard is too noisy"</td></tr>
<tr><td>"Not sure how to feel about my new washing machine. Great color, but hard to figure"</td></tr>
</tbody>
</table>
\n```\n\n:::\n:::\n\n\n:::\n\n\n\n\n\n\n\n### Sentiment\n\nAutomatically returns \"positive\", \"negative\", or \"neutral\" based on the text.\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nreviews |>\n llm_sentiment(review)\n#> # A tibble: 3 × 2\n#> review .sentiment\n#> \n#> 1 This has been the best TV I've ever used. Great screen, and sound. positive \n#> 2 I regret buying this laptop. It is too slow and the keyboard is to… negative \n#> 3 Not sure how to feel about my new washing machine. Great color, bu… neutral\n```\n:::\n\n\n\nFor more information and examples visit this function's \n[R reference page](reference/llm_sentiment.qmd) \n\n## Python \n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.sentiment(\"review\")\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n
<table>
<thead><tr><th>review</th><th>sentiment</th></tr></thead>
<tbody>
<tr><td>"This has been the best TV I've ever used. Great screen, and sound."</td><td>"positive"</td></tr>
<tr><td>"I regret buying this laptop. It is too slow and the keyboard is too noisy"</td><td>"negative"</td></tr>
<tr><td>"Not sure how to feel about my new washing machine. Great color, but hard to figure"</td><td>"neutral"</td></tr>
</tbody>
</table>
\n```\n\n:::\n:::\n\n\n\nFor more information and examples visit this function's \n[Python reference page](reference/MallFrame.qmd#mall.MallFrame.sentiment) \n\n:::\n\n### Summarize\n\nThere may be a need to reduce the number of words in a given text. Typically to \nmake it easier to understand its intent. The function has an argument to \ncontrol the maximum number of words to output \n(`max_words`):\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nreviews |>\n llm_summarize(review, max_words = 5)\n#> # A tibble: 3 × 2\n#> review .summary \n#> \n#> 1 This has been the best TV I've ever used. Gr… it's a great tv \n#> 2 I regret buying this laptop. It is too slow … laptop purchase was a mistake \n#> 3 Not sure how to feel about my new washing ma… having mixed feelings about it\n```\n:::\n\n\n\nFor more information and examples visit this function's \n[R reference page](reference/llm_summarize.qmd) \n\n## Python \n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.summarize(\"review\", 5)\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n
<table>
<thead><tr><th>review</th><th>summary</th></tr></thead>
<tbody>
<tr><td>"This has been the best TV I've ever used. Great screen, and sound."</td><td>"great tv with good features"</td></tr>
<tr><td>"I regret buying this laptop. It is too slow and the keyboard is too noisy"</td><td>"laptop purchase was a mistake"</td></tr>
<tr><td>"Not sure how to feel about my new washing machine. Great color, but hard to figure"</td><td>"feeling uncertain about new purchase"</td></tr>
</tbody>
</table>
\n```\n\n:::\n:::\n\n\n\nFor more information and examples visit this function's \n[Python reference page](reference/MallFrame.qmd#mall.MallFrame.summarize) \n\n:::\n\n### Classify\n\nUse the LLM to categorize the text into one of the options you provide: \n\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nreviews |>\n llm_classify(review, c(\"appliance\", \"computer\"))\n#> # A tibble: 3 × 2\n#> review .classify\n#> \n#> 1 This has been the best TV I've ever used. Gr… computer \n#> 2 I regret buying this laptop. It is too slow … computer \n#> 3 Not sure how to feel about my new washing ma… appliance\n```\n:::\n\n\n\nFor more information and examples visit this function's \n[R reference page](reference/llm_classify.qmd) \n\n## Python \n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.classify(\"review\", [\"computer\", \"appliance\"])\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n
<table>
<thead><tr><th>review</th><th>classify</th></tr></thead>
<tbody>
<tr><td>"This has been the best TV I've ever used. Great screen, and sound."</td><td>"appliance"</td></tr>
<tr><td>"I regret buying this laptop. It is too slow and the keyboard is too noisy"</td><td>"computer"</td></tr>
<tr><td>"Not sure how to feel about my new washing machine. Great color, but hard to figure"</td><td>"appliance"</td></tr>
</tbody>
</table>
\n```\n\n:::\n:::\n\n\n\nFor more information and examples visit this function's \n[Python reference page](reference/MallFrame.qmd#mall.MallFrame.classify) \n\n:::\n\n### Extract \n\nOne of the most interesting use cases Using natural language, we can tell the \nLLM to return a specific part of the text. In the following example, we request\nthat the LLM return the product being referred to. We do this by simply saying \n\"product\". The LLM understands what we *mean* by that word, and looks for that\nin the text.\n\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nreviews |>\n llm_extract(review, \"product\")\n#> # A tibble: 3 × 2\n#> review .extract \n#> \n#> 1 This has been the best TV I've ever used. Gr… tv \n#> 2 I regret buying this laptop. It is too slow … laptop \n#> 3 Not sure how to feel about my new washing ma… washing machine\n```\n:::\n\n\n\nFor more information and examples visit this function's \n[R reference page](reference/llm_extract.qmd) \n\n## Python \n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.extract(\"review\", \"product\")\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n
<table>
<thead><tr><th>review</th><th>extract</th></tr></thead>
<tbody>
<tr><td>"This has been the best TV I've ever used. Great screen, and sound."</td><td>"tv"</td></tr>
<tr><td>"I regret buying this laptop. It is too slow and the keyboard is too noisy"</td><td>"laptop"</td></tr>
<tr><td>"Not sure how to feel about my new washing machine. Great color, but hard to figure"</td><td>"washing machine"</td></tr>
</tbody>
</table>
\n```\n\n:::\n:::\n\n\n\nFor more information and examples visit this function's \n[Python reference page](reference/MallFrame.qmd#mall.MallFrame.extract) \n\n:::\n\n\n### Translate\n\nAs the title implies, this function will translate the text into a specified \nlanguage. What is really nice, it is that you don't need to specify the language\nof the source text. Only the target language needs to be defined. The translation\naccuracy will depend on the LLM\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nreviews |>\n llm_translate(review, \"spanish\")\n#> # A tibble: 3 × 2\n#> review .translation \n#> \n#> 1 This has been the best TV I've ever used. Gr… Esta ha sido la mejor televisió…\n#> 2 I regret buying this laptop. It is too slow … Me arrepiento de comprar este p…\n#> 3 Not sure how to feel about my new washing ma… No estoy seguro de cómo me sien…\n```\n:::\n\n\n\nFor more information and examples visit this function's \n[R reference page](reference/llm_translate.qmd) \n\n## Python \n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.translate(\"review\", \"spanish\")\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n
<table>
<thead><tr><th>review</th><th>translation</th></tr></thead>
<tbody>
<tr><td>"This has been the best TV I've ever used. Great screen, and sound."</td><td>"Esta ha sido la mejor televisión que he utilizado hasta ahora. Gran pantalla y sonido."</td></tr>
<tr><td>"I regret buying this laptop. It is too slow and the keyboard is too noisy"</td><td>"Me arrepiento de comprar este portátil. Es demasiado lento y la tecla es demasiado ruidosa."</td></tr>
<tr><td>"Not sure how to feel about my new washing machine. Great color, but hard to figure"</td><td>"No estoy seguro de cómo sentirme con mi nueva lavadora. Un color maravilloso, pero muy difícil de en…</td></tr>
</tbody>
</table>
\n```\n\n:::\n:::\n\n\n\nFor more information and examples visit this function's \n[Python reference page](reference/MallFrame.qmd#mall.MallFrame.translate) \n\n:::\n\n### Custom prompt\n\nIt is possible to pass your own prompt to the LLM, and have `mall` run it \nagainst each text entry:\n\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nmy_prompt <- paste(\n \"Answer a question.\",\n \"Return only the answer, no explanation\",\n \"Acceptable answers are 'yes', 'no'\",\n \"Answer this about the following text, is this a happy customer?:\"\n)\n\nreviews |>\n llm_custom(review, my_prompt)\n#> # A tibble: 3 × 2\n#> review .pred\n#> \n#> 1 This has been the best TV I've ever used. Great screen, and sound. Yes \n#> 2 I regret buying this laptop. It is too slow and the keyboard is too noi… No \n#> 3 Not sure how to feel about my new washing machine. Great color, but har… No\n```\n:::\n\n\n\nFor more information and examples visit this function's \n[R reference page](reference/llm_custom.qmd) \n\n## Python \n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nmy_prompt = (\n \"Answer a question.\"\n \"Return only the answer, no explanation\"\n \"Acceptable answers are 'yes', 'no'\"\n \"Answer this about the following text, is this a happy customer?:\"\n)\n\nreviews.llm.custom(\"review\", prompt = my_prompt)\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n
<table>
<thead><tr><th>review</th><th>custom</th></tr></thead>
<tbody>
<tr><td>"This has been the best TV I've ever used. Great screen, and sound."</td><td>"Yes"</td></tr>
<tr><td>"I regret buying this laptop. It is too slow and the keyboard is too noisy"</td><td>"No"</td></tr>
<tr><td>"Not sure how to feel about my new washing machine. Great color, but hard to figure"</td><td>"No"</td></tr>
</tbody>
</table>
\n```\n\n:::\n:::\n\n\n\nFor more information and examples visit this function's \n[Python reference page](reference/MallFrame.qmd#mall.MallFrame.custom) \n\n:::\n\n## Model selection and settings\n\nYou can set the model and its options to use when calling the LLM. In this case,\nwe refer to options as model specific things that can be set, such as seed or\ntemperature. \n\n::: {.panel-tabset group=\"language\"}\n## R\n\nInvoking an `llm` function will automatically initialize a model selection\nif you don't have one selected yet. If there is only one option, it will \npre-select it for you. If there are more than one available models, then `mall`\nwill present you as menu selection so you can select which model you wish to \nuse.\n\nCalling `llm_use()` directly will let you specify the model and backend to use.\nYou can also setup additional arguments that will be passed down to the \nfunction that actually runs the prediction. In the case of Ollama, that function\nis [`chat()`](https://hauselin.github.io/ollama-r/reference/chat.html). \n\nThe model to use, and other options can be set for the current R session\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nllm_use(\"ollama\", \"llama3.2\", seed = 100, temperature = 0)\n```\n:::\n\n\n\n\n## Python \n\nThe model and options to be used will be defined at the Polars data frame \nobject level. If not passed, the default model will be **llama3.2**.\n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.use(\"ollama\", \"llama3.2\", options = dict(seed = 100))\n```\n:::\n\n\n\n:::\n\n#### Results caching \n\nBy default `mall` caches the requests and corresponding results from a given\nLLM run. Each response is saved as individual JSON files. By default, the folder\nname is `_mall_cache`. The folder name can be customized, if needed. Also, the\ncaching can be turned off by setting the argument to empty (`\"\"`).\n\n::: {.panel-tabset group=\"language\"}\n## R\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nllm_use(.cache = \"_my_cache\")\n```\n:::\n\n\n\nTo turn off:\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nllm_use(.cache = \"\")\n```\n:::\n\n\n\n## Python \n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.use(_cache = \"my_cache\")\n```\n:::\n\n\n\nTo turn off:\n\n\n\n::: {.cell}\n\n```{.python .cell-code}\nreviews.llm.use(_cache = \"\")\n```\n:::\n\n\n\n:::\n\nFor more information see the [Caching Results](articles/caching.qmd) article. \n\n## Key considerations\n\nThe main consideration is **cost**. Either, time cost, or money cost.\n\nIf using this method with an LLM locally available, the cost will be a long \nrunning time. Unless using a very specialized LLM, a given LLM is a general model. \nIt was fitted using a vast amount of data. So determining a response for each \nrow, takes longer than if using a manually created NLP model. The default model\nused in Ollama is [Llama 3.2](https://ollama.com/library/llama3.2), \nwhich was fitted using 3B parameters. \n\nIf using an external LLM service, the consideration will need to be for the \nbilling costs of using such service. Keep in mind that you will be sending a lot\nof data to be evaluated. \n\nAnother consideration is the novelty of this approach. Early tests are \nproviding encouraging results. 
But you, as an user, will still need to keep\nin mind that the predictions will not be infallible, so always check the output.\nAt this time, I think the best use for this method, is for a quick analysis.\n\n\n## Vector functions (R only)\n\n`mall` includes functions that expect a vector, instead of a table, to run the\npredictions. This should make it easier to test things, such as custom prompts\nor results of specific text. Each `llm_` function has a corresponding `llm_vec_`\nfunction:\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nllm_vec_sentiment(\"I am happy\")\n#> [1] \"positive\"\n```\n:::\n\n::: {.cell}\n\n```{.r .cell-code}\nllm_vec_translate(\"Este es el mejor dia!\", \"english\")\n#> [1] \"It's the best day!\"\n```\n:::\n", "supporting": [], "filters": [ "rmarkdown/pagebreak.lua" diff --git a/index.qmd b/index.qmd index fdefef6..7e165ba 100644 --- a/index.qmd +++ b/index.qmd @@ -23,6 +23,15 @@ mall::llm_use("ollama", "llama3.2", seed = 100, .cache = "_readme_cache") + + +[![R package check](https://github.com/mlverse/mall/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/mlverse/mall/actions/workflows/R-CMD-check.yaml) +[![R package coverage](https://codecov.io/gh/mlverse/mall/branch/main/graph/badge.svg)](https://app.codecov.io/gh/mlverse/mall?branch=main) +[![Lifecycle: +experimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://lifecycle.r-lib.org/articles/stages.html#experimental) + + + Run multiple LLM predictions against a data frame. The predictions are processed row-wise over a specified column. It works using a pre-determined one-shot prompt, along with the current row's content. `mall` has been implemented for both R diff --git a/python/README.md b/python/README.md deleted file mode 100644 index 2eec60b..0000000 --- a/python/README.md +++ /dev/null @@ -1,100 +0,0 @@ -# mall - -## Intro - -Run multiple LLM predictions against a data frame. The predictions are -processed row-wise over a specified column. It works using a -pre-determined one-shot prompt, along with the current row’s content. - -## Install - -To install from Github, use: - -``` python -pip install "mall @ git+https://git@github.com/mlverse/mall.git@python#subdirectory=python" -``` - -## Examples - -``` python -import mall -import polars as pl - -reviews = pl.DataFrame( - data=[ - "This has been the best TV I've ever used. Great screen, and sound.", - "I regret buying this laptop. It is too slow and the keyboard is too noisy", - "Not sure how to feel about my new washing machine. Great color, but hard to figure" - ], - schema=[("review", pl.String)], -) -``` - -## Sentiment - - -``` python -reviews.llm.sentiment("review") -``` - -shape: (3, 2) - -| review | sentiment | -|----------------------------------|------------| -| str | str | -| "This has been the best TV I've… | "positive" | -| "I regret buying this laptop. I… | "negative" | -| "Not sure how to feel about my … | "neutral" | - -## Summarize - -``` python -reviews.llm.summarize("review", 5) -``` - -shape: (3, 2) - -| review | summary | -|----------------------------------|----------------------------------| -| str | str | -| "This has been the best TV I've… | "it's a great tv" | -| "I regret buying this laptop. 
I… | "laptop not worth the money" | -| "Not sure how to feel about my … | "feeling uncertain about new pu… | - -## Translate (as in ‘English to French’) - -``` python -reviews.llm.translate("review", "spanish") -``` - -shape: (3, 2) - -| review | translation | -|----------------------------------|----------------------------------| -| str | str | -| "This has been the best TV I've… | "Esta ha sido la mejor TV que h… | -| "I regret buying this laptop. I… | "Lo lamento comprar este portát… | -| "Not sure how to feel about my … | "No estoy seguro de cómo sentir… | - -## Classify - -``` python -reviews.llm.classify("review", ["computer", "appliance"]) -``` - -shape: (3, 2) - -| review | classify | -|----------------------------------|-------------| -| str | str | -| "This has been the best TV I've… | "appliance" | -| "I regret buying this laptop. I… | "appliance" | -| "Not sure how to feel about my … | "appliance" | - -## LLM session setup - -``` python -reviews.llm.use(options = dict(seed = 100)) -``` - - {'backend': 'ollama', 'model': 'llama3.2', 'options': {'seed': 100}} diff --git a/python/README.qmd b/python/README.qmd deleted file mode 100644 index 318c86c..0000000 --- a/python/README.qmd +++ /dev/null @@ -1,71 +0,0 @@ ---- -format: gfm ---- - -# mall - -## Intro - -Run multiple LLM predictions against a data frame. The predictions are processed row-wise over a specified column. It works using a pre-determined one-shot prompt, along with the current row’s content. - -## Install - -To install from Github, use: - -```python -pip install "mall @ git+https://git@github.com/mlverse/mall.git@python#subdirectory=python" -``` - -## Examples - -```{python} -#| include: false -import polars as pl -from polars.dataframe._html import HTMLFormatter -html_formatter = get_ipython().display_formatter.formatters['text/html'] -html_formatter.for_type(pl.DataFrame, lambda df: "\n".join(HTMLFormatter(df).render())) -``` - - -```{python} -import mall -import polars as pl -data = mall.MallData -reviews = data.reviews -``` - -```{python} -#| include: false -reviews.llm.use(options = dict(seed = 100)) -``` - - -## Sentiment - -```{python} -reviews.llm.sentiment("review") -``` - -## Summarize - -```{python} -reviews.llm.summarize("review", 5) -``` - -## Translate (as in 'English to French') - -```{python} -reviews.llm.translate("review", "spanish") -``` - -## Classify - -```{python} -reviews.llm.classify("review", ["computer", "appliance"]) -``` - -## LLM session setup - -```{python} -reviews.llm.use(options = dict(seed = 100)) -``` diff --git a/r/README.Rmd b/r/README.Rmd deleted file mode 100644 index 4bd15f1..0000000 --- a/r/README.Rmd +++ /dev/null @@ -1,333 +0,0 @@ ---- -output: github_document ---- - - - -```{r, include = FALSE} -knitr::opts_chunk$set( - collapse = TRUE, - comment = "#>", - fig.path = "man/figures/README-", - out.width = "100%" -) -library(dplyr) -library(dbplyr) -library(tictoc) -library(DBI) -source("utils/knitr-print.R") -mall::llm_use("ollama", "llama3.2", seed = 100, .cache = "_readme_cache") -``` - -# mall - - - - -[![R-CMD-check](https://github.com/mlverse/mall/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/mlverse/mall/actions/workflows/R-CMD-check.yaml) -[![Codecov test coverage](https://codecov.io/gh/mlverse/mall/branch/main/graph/badge.svg)](https://app.codecov.io/gh/mlverse/mall?branch=main) -[![Lifecycle: experimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://lifecycle.r-lib.org/articles/stages.html#experimental) - - -## Intro 
- -Run multiple LLM predictions against a data frame. The predictions are processed -row-wise over a specified column. It works using a pre-determined one-shot prompt, -along with the current row's content. The prompt that is use will depend of the -type of analysis needed. Currently, the included prompts perform the following: - -- [Sentiment analysis](#sentiment) -- [Text summarizing](#summarize) -- [Classify text](#classify) -- [Extract one, or several](#extract), specific pieces information from the text -- [Translate text](#translate) -- [Custom prompt](#custom-prompt) - -This package is inspired by the SQL AI functions now offered by vendors such as -[Databricks](https://docs.databricks.com/en/large-language-models/ai-functions.html) -and Snowflake. `mall` uses [Ollama](https://ollama.com/) to -interact with LLMs installed locally. That interaction takes place via the -[`ollamar`](https://hauselin.github.io/ollama-r/) package. - -## Motivation - -We want to new find ways to help data scientists use LLMs in their daily work. -Unlike the familiar interfaces, such as chatting and code completion, this interface -runs your text data directly against the LLM. The LLM's flexibility, allows for -it to adapt to the subject of your data, and provide surprisingly accurate predictions. -This saves the data scientist the need to write and tune an NLP model. - -```{r} -#| include: false - -# Add paragraph about: thanks to the more widespread availability of capable -# local llms, data does not leave your company, no $$ cost to use - -``` - -## Get started - -- Install `mall` from Github - ```r - pak::pak("mlverse/mall") - ``` - -### With local LLMs - -- Install Ollama in your machine. The `ollamar` package's website provides this -[Installation guide](https://hauselin.github.io/ollama-r/#installation) - -- Download an LLM model. For example, I have been developing this package using -Llama 3.2 to test. To get that model you can run: - ```r - ollamar::pull("llama3.2") - ``` - -### With Databricks - -If you pass a table connected to **Databricks** via `odbc`, `mall` will -automatically use Databricks' LLM instead of Ollama. *You won't need Ollama -installed if you are using Databricks only.* - -`mall` will call the appropriate SQL AI function. For more information see our -[Databricks article.](https://mlverse.github.io/mall/articles/databricks.html) - -## LLM functions - -### Sentiment - -Primarily, `mall` provides verb-like functions that expect a `tbl` as -their first argument. This allows us to use them in piped operations. - -We will start with loading a very small data set contained in `mall`. It has -3 product reviews that we will use as the source of our examples. - -```{r} -library(mall) - -data("reviews") - -reviews -``` - -For the first example, we'll asses the sentiment of each review. In order to -do this we will call `llm_sentiment()`: - -```{r} -reviews |> - llm_sentiment(review) -``` - -The function let's us modify the options to choose from: - -```{r} -reviews |> - llm_sentiment(review, options = c("positive", "negative")) -``` - -As mentioned before, by being pipe friendly, the results from the LLM prediction -can be used in further transformations: - -```{r} -reviews |> - llm_sentiment(review, options = c("positive", "negative")) |> - filter(.sentiment == "negative") -``` - -### Summarize - -There may be a need to reduce the number of words in a given text. Usually, to -make it easier to capture its intent. To do this, use `llm_summarize()`. 
This -function has an argument to control the maximum number of words to output -(`max_words`): - -```{r} -reviews |> - llm_summarize(review, max_words = 5) -``` - -To control the name of the prediction field, you can change `pred_name` argument. -This works with the other `llm_` functions as well. - -```{r} -reviews |> - llm_summarize(review, max_words = 5, pred_name = "review_summary") -``` - -### Classify - -Use the LLM to categorize the text into one of the options you provide: - -```{r} -reviews |> - llm_classify(review, c("appliance", "computer")) -``` - -### Extract - -One of the most interesting operations. Using natural language, we can tell the -LLM to return a specific part of the text. In the following example, we request -that the LLM return the product being referred to. We do this by simply saying -"product". The LLM understands what we *mean* by that word, and looks for that -in the text. - - -```{r} -reviews |> - llm_extract(review, "product") -``` - -### Translate - -As the title implies, this function will translate the text into a specified -language. What is really nice, it is that you don't need to specify the language -of the source text. Only the target language needs to be defined. The translation -accuracy will depend on the LLM - -```{r} -reviews |> - llm_translate(review, "spanish") -``` - - -### Custom prompt - -It is possible to pass your own prompt to the LLM, and have `mall` run it -against each text entry. Use `llm_custom()` to access this functionality: - -```{r} -my_prompt <- paste( - "Answer a question.", - "Return only the answer, no explanation", - "Acceptable answers are 'yes', 'no'", - "Answer this about the following text, is this a happy customer?:" -) - -reviews |> - llm_custom(review, my_prompt) -``` - -## Initialize session - -Invoking an `llm_` function will automatically initialize a model selection -if you don't have one selected yet. If there is only one option, it will -pre-select it for you. If there are more than one available models, then `mall` -will present you as menu selection so you can select which model you wish to -use. - -Calling `llm_use()` directly will let you specify the model and backend to use. -You can also setup additional arguments that will be passed down to the -function that actually runs the prediction. In the case of Ollama, that function -is [`chat()`](https://hauselin.github.io/ollama-r/reference/chat.html). - -```{r, eval = FALSE} -llm_use("ollama", "llama3.2", seed = 100, temperature = 0) -``` - -## Key considerations - -The main consideration is **cost**. Either, time cost, or money cost. - -If using this method with an LLM locally available, the cost will be a long -running time. Unless using a very specialized LLM, a given LLM is a general model. -It was fitted using a vast amount of data. So determining a response for each -row, takes longer than if using a manually created NLP model. The default model -used in Ollama is [Llama 3.2](https://ollama.com/library/llama3.2), -which was fitted using 3B parameters. - -If using an external LLM service, the consideration will need to be for the -billing costs of using such service. Keep in mind that you will be sending a lot -of data to be evaluated. - -Another consideration is the novelty of this approach. Early tests are -providing encouraging results. But you, as an user, will still need to keep -in mind that the predictions will not be infallible, so always check the output. -At this time, I think the best use for this method, is for a quick analysis. 
- -## Performance - -We will briefly cover this methods performance from two perspectives: - -- How long the analysis takes to run locally - -- How well it predicts - -To do so, we will use the `data_bookReviews` data set, provided by the `classmap` -package. For this exercise, only the first 100, of the total 1,000, are going -to be part of this analysis. - -```{r} -library(classmap) - -data(data_bookReviews) - -data_bookReviews |> - glimpse() -``` -As per the docs, `sentiment` is a factor indicating the sentiment of the review: -negative (1) or positive (2) - -```{r} -length(strsplit(paste(head(data_bookReviews$review, 100), collapse = " "), " ")[[1]]) -``` - -Just to get an idea of how much data we're processing, I'm using a very, very -simple word count. So we're analyzing a bit over 20 thousand words. - -```{r} -reviews_llm <- data_bookReviews |> - head(100) |> - llm_sentiment( - col = review, - options = c("positive" ~ 2, "negative" ~ 1), - pred_name = "predicted" - ) -``` - -As far as **time**, on my Apple M3 machine, it took about 1.5 minutes to process, -100 rows, containing 20 thousand words. Setting `temp` to 0 in `llm_use()`, -made the model run faster. - -The package uses `purrr` to send each prompt individually to the LLM. But, I did -try a few different ways to speed up the process, unsuccessfully: - -- Used `furrr` to send multiple requests at a time. This did not work because -either the LLM or Ollama processed all my requests serially. So there was -no improvement. - -- I also tried sending more than one row's text at a time. This cause instability -in the number of results. For example sending 5 at a time, sometimes returned 7 -or 8. Even sending 2 was not stable. - -This is what the new table looks like: - -```{r} -reviews_llm -``` - -I used `yardstick` to see how well the model performed. Of course, the accuracy -will not be of the "truth", but rather the package's results recorded in -`sentiment`. - -```{r} -library(forcats) - -reviews_llm |> - mutate(predicted = as.factor(predicted)) |> - yardstick::accuracy(sentiment, predicted) -``` - -## Vector functions - -`mall` includes functions that expect a vector, instead of a table, to run the -predictions. This should make it easier to test things, such as custom prompts -or results of specific text. Each `llm_` function has a corresponding `llm_vec_` -function: - -```{r} -llm_vec_sentiment("I am happy") -``` - -```{r} -llm_vec_translate("Este es el mejor dia!", "english") -``` diff --git a/r/README.md b/r/README.md deleted file mode 100644 index e95116f..0000000 --- a/r/README.md +++ /dev/null @@ -1,407 +0,0 @@ - - - -# mall - - - - - -[![R-CMD-check](https://github.com/mlverse/mall/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/mlverse/mall/actions/workflows/R-CMD-check.yaml) -[![Codecov test -coverage](https://codecov.io/gh/mlverse/mall/branch/main/graph/badge.svg)](https://app.codecov.io/gh/mlverse/mall?branch=main) -[![Lifecycle: -experimental](https://img.shields.io/badge/lifecycle-experimental-orange.svg)](https://lifecycle.r-lib.org/articles/stages.html#experimental) - - -## Intro - -Run multiple LLM predictions against a data frame. The predictions are -processed row-wise over a specified column. It works using a -pre-determined one-shot prompt, along with the current row’s content. -The prompt that is use will depend of the type of analysis needed. 
-Currently, the included prompts perform the following: - -- [Sentiment analysis](#sentiment) -- [Text summarizing](#summarize) -- [Classify text](#classify) -- [Extract one, or several](#extract), specific pieces information from - the text -- [Translate text](#translate) -- [Custom prompt](#custom-prompt) - -This package is inspired by the SQL AI functions now offered by vendors -such as -[Databricks](https://docs.databricks.com/en/large-language-models/ai-functions.html) -and Snowflake. `mall` uses [Ollama](https://ollama.com/) to interact -with LLMs installed locally. That interaction takes place via the -[`ollamar`](https://hauselin.github.io/ollama-r/) package. - -## Motivation - -We want to new find ways to help data scientists use LLMs in their daily -work. Unlike the familiar interfaces, such as chatting and code -completion, this interface runs your text data directly against the LLM. -The LLM’s flexibility, allows for it to adapt to the subject of your -data, and provide surprisingly accurate predictions. This saves the data -scientist the need to write and tune an NLP model. - -## Get started - -- Install `mall` from Github - - ``` r - pak::pak("mlverse/mall") - ``` - -### With local LLMs - -- Install Ollama in your machine. The `ollamar` package’s website - provides this [Installation - guide](https://hauselin.github.io/ollama-r/#installation) - -- Download an LLM model. For example, I have been developing this - package using Llama 3.2 to test. To get that model you can run: - - ``` r - ollamar::pull("llama3.2") - ``` - -### With Databricks - -If you pass a table connected to **Databricks** via `odbc`, `mall` will -automatically use Databricks’ LLM instead of Ollama. *You won’t need -Ollama installed if you are using Databricks only.* - -`mall` will call the appropriate SQL AI function. For more information -see our [Databricks -article.](https://mlverse.github.io/mall/articles/databricks.html) - -## LLM functions - -### Sentiment - -Primarily, `mall` provides verb-like functions that expect a `tbl` as -their first argument. This allows us to use them in piped operations. - -We will start with loading a very small data set contained in `mall`. It -has 3 product reviews that we will use as the source of our examples. - -``` r -library(mall) - -data("reviews") - -reviews -#> # A tibble: 3 × 1 -#> review -#> -#> 1 This has been the best TV I've ever used. Great screen, and sound. -#> 2 I regret buying this laptop. It is too slow and the keyboard is too noisy -#> 3 Not sure how to feel about my new washing machine. Great color, but hard to f… -``` - -For the first example, we’ll asses the sentiment of each review. In -order to do this we will call `llm_sentiment()`: - -``` r -reviews |> - llm_sentiment(review) -#> # A tibble: 3 × 2 -#> review .sentiment -#> -#> 1 This has been the best TV I've ever used. Great screen, and sound. positive -#> 2 I regret buying this laptop. It is too slow and the keyboard is to… negative -#> 3 Not sure how to feel about my new washing machine. Great color, bu… neutral -``` - -The function let’s us modify the options to choose from: - -``` r -reviews |> - llm_sentiment(review, options = c("positive", "negative")) -#> # A tibble: 3 × 2 -#> review .sentiment -#> -#> 1 This has been the best TV I've ever used. Great screen, and sound. positive -#> 2 I regret buying this laptop. It is too slow and the keyboard is to… negative -#> 3 Not sure how to feel about my new washing machine. 
-
-As mentioned before, because these functions are pipe friendly, the
-results from the LLM prediction can be used in further transformations:
-
-``` r
-library(dplyr)
-
-reviews |>
-  llm_sentiment(review, options = c("positive", "negative")) |>
-  filter(.sentiment == "negative")
-#> # A tibble: 2 × 2
-#>   review                                                              .sentiment
-#>
-#> 1 I regret buying this laptop. It is too slow and the keyboard is to… negative
-#> 2 Not sure how to feel about my new washing machine. Great color, bu… negative
-```
-
-### Summarize
-
-There may be a need to reduce the number of words in a given text,
-usually to make it easier to capture its intent. To do this, use
-`llm_summarize()`. This function has an argument to control the maximum
-number of words in the output (`max_words`):
-
-``` r
-reviews |>
-  llm_summarize(review, max_words = 5)
-#> # A tibble: 3 × 2
-#>   review                                        .summary
-#>
-#> 1 This has been the best TV I've ever used. Gr… it's a great tv
-#> 2 I regret buying this laptop. It is too slow … laptop purchase was a mistake
-#> 3 Not sure how to feel about my new washing ma… having mixed feelings about it
-```
-
-To control the name of the prediction field, you can change the
-`pred_name` argument. This works with the other `llm_` functions as
-well.
-
-``` r
-reviews |>
-  llm_summarize(review, max_words = 5, pred_name = "review_summary")
-#> # A tibble: 3 × 2
-#>   review                                        review_summary
-#>
-#> 1 This has been the best TV I've ever used. Gr… it's a great tv
-#> 2 I regret buying this laptop. It is too slow … laptop purchase was a mistake
-#> 3 Not sure how to feel about my new washing ma… having mixed feelings about it
-```
-
-### Classify
-
-Use the LLM to categorize the text into one of the options you provide:
-
-``` r
-reviews |>
-  llm_classify(review, c("appliance", "computer"))
-#> # A tibble: 3 × 2
-#>   review                                        .classify
-#>
-#> 1 This has been the best TV I've ever used. Gr… computer
-#> 2 I regret buying this laptop. It is too slow … computer
-#> 3 Not sure how to feel about my new washing ma… appliance
-```
-
-### Extract
-
-This is one of the most interesting operations. Using natural language,
-we can tell the LLM to return a specific part of the text. In the
-following example, we request that the LLM return the product being
-referred to. We do this by simply saying “product”. The LLM understands
-what we *mean* by that word, and looks for it in the text.
-
-``` r
-reviews |>
-  llm_extract(review, "product")
-#> # A tibble: 3 × 2
-#>   review                                        .extract
-#>
-#> 1 This has been the best TV I've ever used. Gr… tv
-#> 2 I regret buying this laptop. It is too slow … laptop
-#> 3 Not sure how to feel about my new washing ma… washing machine
-```
-
-### Translate
-
-As the title implies, this function will translate the text into a
-specified language. What is really nice is that you don’t need to
-specify the language of the source text. Only the target language needs
-to be defined. The translation accuracy will depend on the LLM.
-
-``` r
-reviews |>
-  llm_translate(review, "spanish")
-#> # A tibble: 3 × 2
-#>   review                                        .translation
-#>
-#> 1 This has been the best TV I've ever used. Gr… Esta ha sido la mejor televisió…
-#> 2 I regret buying this laptop. It is too slow … Me arrepiento de comprar este p…
-#> 3 Not sure how to feel about my new washing ma… No estoy seguro de cómo me sien…
-```
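-
-To try a translation on a single string before running it over a whole
-column, you can use the vector variant covered under *Vector functions*
-at the end of this README:
-
-``` r
-llm_vec_translate("Este es el mejor dia!", "english")
-```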
-
-### Custom prompt
-
-It is possible to pass your own prompt to the LLM, and have `mall` run
-it against each text entry. Use `llm_custom()` to access this
-functionality:
-
-``` r
-my_prompt <- paste(
-  "Answer a question.",
-  "Return only the answer, no explanation",
-  "Acceptable answers are 'yes', 'no'",
-  "Answer this about the following text, is this a happy customer?:"
-)
-
-reviews |>
-  llm_custom(review, my_prompt)
-#> # A tibble: 3 × 2
-#>   review                                                                   .pred
-#>
-#> 1 This has been the best TV I've ever used. Great screen, and sound.       Yes
-#> 2 I regret buying this laptop. It is too slow and the keyboard is too noi… No
-#> 3 Not sure how to feel about my new washing machine. Great color, but har… No
-```
-
-## Initialize session
-
-Invoking an `llm_` function will automatically initialize a model
-selection if you don’t have one selected yet. If there is only one
-option, it will pre-select it for you. If there is more than one
-available model, then `mall` will present you with a menu so you can
-select which model you wish to use.
-
-Calling `llm_use()` directly will let you specify the model and backend
-to use. You can also set up additional arguments that will be passed
-down to the function that actually runs the prediction. In the case of
-Ollama, that function is
-[`chat()`](https://hauselin.github.io/ollama-r/reference/chat.html).
-
-``` r
-llm_use("ollama", "llama3.2", seed = 100, temperature = 0)
-```
-
-## Key considerations
-
-The main consideration is **cost**, either in time or in money.
-
-If using this method with a locally available LLM, the cost will be a
-long running time. Unless using a very specialized LLM, a given LLM is a
-general model: it was fitted using a vast amount of data, so determining
-a response for each row takes longer than with a manually created NLP
-model. The default model used in Ollama is [Llama
-3.2](https://ollama.com/library/llama3.2), which has 3 billion
-parameters.
-
-If using an external LLM service, the consideration will be the billing
-cost of that service. Keep in mind that you will be sending a lot of
-data to be evaluated.
-
-Another consideration is the novelty of this approach. Early tests are
-providing encouraging results, but you, as a user, will still need to
-keep in mind that the predictions will not be infallible, so always
-check the output. At this time, I think the best use for this method is
-for a quick analysis.
-
-## Performance
-
-We will briefly cover this method’s performance from two perspectives:
-
-- How long the analysis takes to run locally
-
-- How well it predicts
-
-To do so, we will use the `data_bookReviews` data set, provided by the
-`classmap` package. For this exercise, only the first 100 of the total
-1,000 reviews will be part of this analysis.
-
-``` r
-library(classmap)
-
-data(data_bookReviews)
-
-data_bookReviews |>
-  glimpse()
-#> Rows: 1,000
-#> Columns: 2
-#> $ review    "i got this as both a book and an audio file. i had waited t…
-#> $ sentiment 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 1, 1, 1, 1, 2, 1, …
-```
-
-As per the docs, `sentiment` is a factor indicating the sentiment of the
-review: negative (1) or positive (2).
-
-``` r
-length(strsplit(paste(head(data_bookReviews$review, 100), collapse = " "), " ")[[1]])
-#> [1] 20470
-```
-
-Just to get an idea of how much data we’re processing, I’m using a very,
-very simple word count. So we’re analyzing a bit over 20 thousand words.
-
-``` r
-reviews_llm <- data_bookReviews |>
-  head(100) |>
-  llm_sentiment(
-    col = review,
-    options = c("positive" ~ 2, "negative" ~ 1),
-    pred_name = "predicted"
-  )
-#> ! There were 2 predictions with invalid output, they were coerced to NA
-```
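-
-Two of the answers could not be mapped to the expected labels, so they
-show up as `NA` in `predicted`. A quick way to review those rows before
-scoring the results (a small sketch, reusing the objects created above):
-
-``` r
-# Inspect the reviews whose prediction could not be parsed
-reviews_llm |>
-  filter(is.na(predicted))
-```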
-
-As far as **time** goes, on my Apple M3 machine it took about 1.5
-minutes to process 100 rows containing 20 thousand words. Setting
-`temperature` to 0 in `llm_use()` made the model run faster.
-
-The package uses `purrr` to send each prompt individually to the LLM.
-But I did try a few different ways to speed up the process,
-unsuccessfully:
-
-- I used `furrr` to send multiple requests at a time. This did not work
-  because either the LLM or Ollama processed all of my requests
-  serially, so there was no improvement.
-
-- I also tried sending more than one row’s text at a time. This caused
-  instability in the number of results. For example, sending 5 at a time
-  sometimes returned 7 or 8 results. Even sending 2 was not stable.
-
-This is what the new table looks like:
-
-``` r
-reviews_llm
-#> # A tibble: 100 × 3
-#>    review                                        sentiment predicted
-#>
-#>  1 "i got this as both a book and an audio file… 1         1
-#>  2 "this book places too much emphasis on spend… 1         1
-#>  3 "remember the hollywood blacklist? the holly… 2         2
-#>  4 "while i appreciate what tipler was attempti… 1         1
-#>  5 "the others in the series were great, and i … 1         1
-#>  6 "a few good things, but she's lost her edge … 1         1
-#>  7 "words cannot describe how ripped off and di… 1         1
-#>  8 "1. the persective of most writers is shaped… 1         NA
-#>  9 "i have been a huge fan of michael crichton … 1         1
-#> 10 "i saw dr. polk on c-span a month or two ago… 2         2
-#> # ℹ 90 more rows
-```
-
-I used `yardstick` to see how well the model performed. Of course, the
-accuracy is not measured against the “truth”, but rather against the
-labels recorded in `sentiment`.
-
-``` r
-library(forcats)
-
-reviews_llm |>
-  mutate(predicted = as.factor(predicted)) |>
-  yardstick::accuracy(sentiment, predicted)
-#> # A tibble: 1 × 3
-#>   .metric  .estimator .estimate
-#>
-#> 1 accuracy binary         0.980
-```
-
-## Vector functions
-
-`mall` includes functions that expect a vector, instead of a table, to
-run the predictions. This should make it easier to test things, such as
-custom prompts or the results for a specific piece of text. Each `llm_`
-function has a corresponding `llm_vec_` function:
-
-``` r
-llm_vec_sentiment("I am happy")
-#> [1] "positive"
-```
-
-``` r
-llm_vec_translate("Este es el mejor dia!", "english")
-#> [1] "It's the best day!"
-```
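-
-The same naming rule applies to the other verbs. For example, a quick
-classification check on a single string could look like this (a sketch
-that mirrors the table version of `llm_classify()` shown earlier):
-
-``` r
-llm_vec_classify("The keyboard on this laptop is too noisy", c("appliance", "computer"))
-```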