
fix(process-flight-data): update dependency mlflow to v2.18.0 #20556

Merged 1 commit from renovate/process-flight-data-mlflow-2.x into main on Nov 18, 2024

Conversation

renovate[bot] (Contributor) commented Nov 18, 2024

This PR contains the following updates:

Package | Change
mlflow  | ==2.17.2 -> ==2.18.0

Warning: Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

mlflow/mlflow (mlflow)

v2.18.0

Compare Source

We are excited to announce the release of MLflow 2.18.0! This release includes a number of significant features, enhancements, and bug fixes.

Python Version Update

Python 3.8 has reached end-of-life. With official support for this legacy version dropped, MLflow now requires Python 3.9
as its minimum supported version.

Note: If you are currently using MLflow's ChatModel interface for authoring custom GenAI applications, please ensure that you
have read the future breaking changes section below.

Major New Features
  • 🦺 Fluent API Thread/Process Safety - MLflow's fluent APIs for tracking and the model registry have been overhauled to add support for both thread and multi-process safety. You are no longer forced to use the Client APIs for managing experiments, runs, and logging from within multiprocessing and threaded applications; a minimal sketch follows this feature list. (#​13456, #​13419, @​WeichenXu123)

  • 🧩 DSPy flavor - MLflow now supports logging, loading, and tracing of DSPy models, broadening the support for advanced GenAI authoring within MLflow. Check out the MLflow DSPy Flavor documentation to get started! (#​13131, #​13279, #​13369, #​13345, @​chenmoneygithub, #​13543, #​13800, #​13807, @​B-Step62, #​13289, @​michael-berk)

  • 🖥️ Enhanced Trace UI - MLflow Tracing's UI has undergone a significant overhaul, bringing usability and quality-of-life improvements to auditing and investigating the contents of GenAI traces, from enhanced span content rendering using markdown to a standardized span component structure. (#​13685, #​13357, #​13242, @​daniellok-db)

  • 🚄 New Tracing Integrations - MLflow Tracing now supports DSPy, LiteLLM, and Google Gemini, enabling a one-line, fully automated tracing experience; see the tracing sketch after this list. These integrations unlock enhanced observability across a broader range of industry tools. Stay tuned for upcoming integrations and updates! (#​13801, @​TomeHirata, #​13585, @​B-Step62)

  • 📊 Expanded LLM-as-a-Judge Support - MLflow now enhances its evaluation capabilities with support for additional providers, including Anthropic, Bedrock, Mistral, and TogetherAI, alongside existing providers like OpenAI. Users can now also configure proxy endpoints or self-hosted LLMs that follow the provider API specs by using the new proxy_url and extra_headers options. Visit the LLM-as-a-Judge documentation for more details! (#​13715, #​13717, @​B-Step62)

  • ⏰ Environment Variable Detection - MLflow now detects environment variables set during model logging and reminds users to configure them for deployment. In addition, the mlflow.models.predict utility now includes these variables in serving simulations, improving pre-deployment validation. (#​13584, @​serena-ruan)
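
To make the fluent-API thread-safety feature above concrete, here is a minimal sketch that logs runs from a thread pool using only fluent calls. The experiment name, metric name, and values are placeholders, and the exact concurrency guarantees should be confirmed against the 2.18.0 documentation.

```python
# Minimal sketch: logging runs from worker threads with only the fluent APIs.
# Experiment name, run names, and metric values are placeholders for illustration.
from concurrent.futures import ThreadPoolExecutor

import mlflow

mlflow.set_experiment("threaded-fluent-demo")  # hypothetical experiment name


def train_one(trial: int) -> None:
    # Each worker opens and closes its own run via the fluent API; with 2.18.0
    # this no longer requires falling back to MlflowClient for thread safety.
    with mlflow.start_run(run_name=f"trial-{trial}"):
        mlflow.log_param("trial", trial)
        mlflow.log_metric("score", trial * 0.1)  # placeholder metric


with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(train_one, range(8)))
```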

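The new tracing integrations are described as one-line enablement; the sketch below assumes they expose flavor-level autolog() entry points following the existing mlflow.<flavor>.autolog() pattern (as mlflow.openai.autolog() does), so verify the exact module names against the 2.18.0 tracing docs.

```python
# Minimal sketch, assuming the new tracing integrations follow the existing
# mlflow.<flavor>.autolog() pattern; verify module names in the 2.18.0 docs.
import mlflow

mlflow.dspy.autolog()     # trace DSPy programs
mlflow.litellm.autolog()  # trace LiteLLM completion calls
mlflow.gemini.autolog()   # trace Google Gemini API calls
```
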
Breaking Changes to ChatModel Interface
  • ChatModel Interface Updates - As part of a broader unification effort across MLflow and the services that rely on or deeply integrate with MLflow's GenAI features, we are taking a phased approach to establishing a consistent, standard interface for custom GenAI application development and usage. In the first phase (planned over the next few MLflow releases), several interfaces are being marked as deprecated because they will change. The planned changes, illustrated by the sketch after this list, are:

    • Renaming of Interfaces:
      • ChatRequest → ChatCompletionRequest, to provide disambiguation for future planned request interfaces.
      • ChatResponse → ChatCompletionResponse, for the same reason as the input interface.
      • The metadata fields within ChatRequest and ChatResponse → custom_inputs and custom_outputs, respectively.
    • Streaming Updates:
      • predict_stream will be updated to enable true streaming for custom GenAI applications. Currently, it returns a generator with synchronous outputs from predict. In a future release, it will return a generator of ChatCompletionChunks, enabling asynchronous streaming. While the API call structure will remain the same, the returned data payload will change significantly, aligning with LangChain’s implementation.
    • Legacy Dataclass Deprecation:
      • Dataclasses in mlflow.models.rag_signatures will be deprecated, merging into unified ChatCompletionRequest, ChatCompletionResponse, and ChatCompletionChunks.
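
For context on what these renames affect, below is a rough sketch of the current ChatModel-style interface as of 2.18.0. The helper classes other than ChatResponse (ChatMessage, ChatChoice, ChatParams) and their constructors are assumptions based on the existing mlflow.pyfunc and mlflow.types.llm modules, so treat this as an illustration of the pre-rename surface rather than a definitive implementation.

```python
# Rough sketch of the pre-rename interface targeted by the deprecations above.
# Helper class names and constructors are assumptions; check mlflow.types.llm in
# the 2.18.0 API reference before relying on this.
from mlflow.pyfunc import ChatModel
from mlflow.types.llm import ChatChoice, ChatMessage, ChatParams, ChatResponse


class EchoChatModel(ChatModel):
    def predict(self, context, messages: list[ChatMessage], params: ChatParams) -> ChatResponse:
        # Echo the last user message; a real model would call an LLM here.
        reply = messages[-1].content if messages else ""
        return ChatResponse(
            choices=[ChatChoice(index=0, message=ChatMessage(role="assistant", content=reply))]
        )

    def predict_stream(self, context, messages, params):
        # As of 2.18.0 this yields synchronous outputs derived from predict();
        # a future release is slated to yield ChatCompletionChunk objects instead.
        yield self.predict(context, messages, params)
```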

Other Features:

Bug fixes:

  • [Database] Cascade deletes to datasets when deleting experiments to fix a bug in MLflow's gc command when deleting experiments with logged datasets (#​13741, @​daniellok-db)
  • [Models] Fix a bug with Langchain's pyfunc predict input conversion (#​13652, @​serena-ruan)
  • [Models] Fix signature inference for subclasses and Optional dataclasses that define a model's signature (#​13440, @​bbqiu)
  • [Tracking] Fix an issue with async logging batch splitting validation rules (#​13722, @​WeichenXu123)
  • [Tracking] Fix an issue with LangChain's autologging thread-safety behavior (#​13672, @​B-Step62)
  • [Tracking] Disable support for running spark autologging in a threadpool due to limitations in Spark (#​13599, @​WeichenXu123)
  • [Tracking] Mark role and index as required for chat schema (#​13279, @​chenmoneygithub)
  • [Tracing] Handle raw response in openai autolog (#​13802, @​harupy)
  • [Tracing] Fix a bug with tracing source run behavior when running inference with multithreading on Langchain models (#​13610, @​WeichenXu123)

Documentation updates:

Small bug fixes and documentation updates:

#​13775, #​13768, #​13764, #​13744, #​13699, #​13742, #​13703, #​13669, #​13682, #​13569, #​13563, #​13562, #​13539, #​13537, #​13533, #​13408, #​13295, @​serena-ruan; #​13768, #​13764, #​13761, #​13738, #​13737, #​13735, #​13734, #​13723, #​13726, #​13662, #​13692, #​13689, #​13688, #​13680, #​13674, #​13666, #​13661, #​13625, #​13460, #​13626, #​13546, #​13621, #​13623, #​13603, #​13617, #​13614, #​13606, #​13600, #​13583, #​13601, #​13602, #​13604, #​13598, #​13596, #​13597, #​13531, #​13594, #​13589, #​13581, #​13112, #​13587, #​13582, #​13579, #​13578, #​13545, #​13572, #​13571, #​13564, #​13559, #​13565, #​13558, #​13541, #​13560, #​13556, #​13534, #​13386, #​13532, #​13385, #​13384, #​13383, #​13507, #​13523, #​13518, #​13492, #​13493, #​13487, #​13490, #​13488, #​13449, #​13471, #​13417, #​13445, #​13430, #​13448, #​13443, #​13429, #​13418, #​13412, #​13382, #​13402, #​13381, #​13364, #​13356, #​13309, #​13313, #​13334, #​13331, #​13273, #​13322, #​13319, #​13308, #​13302, #​13268, #​13298, #​13296, @​harupy; #​13705, @​williamjamir; #​13632, @​shichengzhou-db; #​13755, #​13712, #​13260, @​BenWilson2; #​13745, #​13743, #​13697, #​13548, #​13549, #​13577, #​13349, #​13351, #​13350, #​13342, #​13341, @​WeichenXu123; #​13807, #​13798, #​13787, #​13786, #​13762, #​13749, #​13733, #​13678, #​13721, #​13611, #​13528, #​13444, #​13450, #​13360, #​13416, #​13415, #​13336, #​13305, #​13271, @​B-Step62; #​13808, #​13708, @​smurching; #​13739, @​fedorkobak; #​13728, #​13719, #​13695, #​13677, @​TomeHirata; #​13776, #​13736, #​13649, #​13285, #​13292, #​13282, #​13283, #​13267, @​daniellok-db; #​13711, @​bhavya2109sharma; #​13693, #​13658, @​aravind-segu; #​13553, @​dsuhinin; #​13663, @​gitlijian; #​13657, #​13629, @​parag-shendye; #​13630, @​JohannesJungbluth; #​13613, @​itepifanio; #​13480, @​agjendem; #​13627, @​ilyaresh; #​13592, #​13410, #​13358, #​13233, @​nojaf; #​13660, #​13505, @​sunishsheth2009; #​13414, @​lmoros-DB; #​13399, @​Abubakar17; #​13390, @​KekmaTime; #​13291, @​michael-berk; #​12511, @​jgiannuzzi; #​13265, @​Ahar28; #​13785, @​Rick-McCoy; #​13676, @​hyolim-e; #​13718, @​annzhang-db; #​13705, @​williamjamir


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.


sonarcloud bot commented Nov 18, 2024

mergify bot merged commit 0f81393 into main on Nov 18, 2024
148 checks passed
mergify bot deleted the renovate/process-flight-data-mlflow-2.x branch on November 18, 2024 at 23:58