Releases: hand-e-fr/OpenHosta

v1.2.1

31 Oct 08:08
218a0a4

Fixes

  • OpenHosta can no longer be used with Python 3.13, due to an incompatibility with PyTorch.

v2.0-beta1

24 Oct 09:26
Pre-release

General refactoring.

v1.2.0

14 Oct 09:48
5b52cdd

v1.2.0 - 10/14/2024

New Features

  • predict Function
    The predict function is now available, allowing you to create internal models (currently supporting linear regression) based on user-provided training data. This simplifies model generation without relying on external APIs; a hedged usage sketch follows this list. Key functionalities include:

    • .retrain: Retrain models with specified parameters.
    • .continue_train: Continue training with existing model weights.
    • .emulate: Run predictions through an LLM or create a model directly using internal linear regression based on training data.
  • TrainingSet Management
    Manage training datasets effortlessly with new tools:

    • .visualize: Inspect current data visually.
    • .add: Add new examples.
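
As a rough illustration of how these pieces could fit together, here is a minimal sketch. It assumes the documented OpenHosta pattern of returning predict() from a typed, docstring-annotated function; the TrainingSet import, the keyword names (hosta_out, epochs), and the example data are assumptions, not the confirmed API.

from OpenHosta import predict, TrainingSet  # import path assumed

def estimate_price(surface_m2: float, rooms: int) -> float:
    """Estimate the price of an apartment from its surface area and room count."""
    return predict()  # builds an internal linear-regression model from the training data

# Hypothetical training-set management; keyword names are illustrative only.
dataset = TrainingSet(estimate_price)
dataset.add(surface_m2=45.0, rooms=2, hosta_out=210_000.0)  # add one example
dataset.visualize()                                          # inspect the current data

estimate_price.retrain(epochs=100)        # retrain with chosen parameters (names assumed)
estimate_price.continue_train(epochs=20)  # continue from the existing weights
print(estimate_price(60.0, 3))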

Enhancements

  • Expanded Dataset Support:
    load_training_example (previously load_examples) supports JSON, JSONL, and CSV formats for easier integration.

  • Verbose Mode in predict:
    Track detailed model training and define target losses with get_loss.
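
These notes do not show exact signatures, so the following is only a hedged sketch: it assumes load_training_example takes a dataset path plus the target function, and that verbosity and the target loss are passed as keyword arguments whose real names may differ.

from OpenHosta import predict, load_training_example  # import path assumed

def estimate_price(surface_m2: float, rooms: int) -> float:
    """Estimate the price of an apartment from its surface area and room count."""
    return predict()

# Hypothetical signature: argument order and keyword names are assumptions.
load_training_example("apartments.jsonl", hosta_func=estimate_price)  # JSON, JSONL, or CSV

# Hypothetical: verbose training with a target loss; the exact mechanism may differ.
estimate_price.retrain(epochs=200, verbose=True, get_loss=1e-3)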

v1.1.2

14 Oct 07:27
  • Fixes
    • Removed debug prints.

v1.2-rc1

10 Oct 14:50
cb66278
Pre-release

v1.2-rc1 - 10/10/2024

New Features

  1. TrainingSet Management
    Manage training datasets effortlessly with new tools:

    • .visualize: Inspect current data visually.
    • .add: Add new examples.
  2. Enhanced predict Attributes
    New functionalities for predict:

    • .retrain: Retrain models with specified parameters.
    • .continue_train: Continue training with existing weights.
    • .emulate: Run predictions through an LLM.

Enhancements

  • Expanded Dataset Support: load_training_example (previously load_examples) supports JSON, JSONL, and CSV formats for easier integration.

  • Verbose Mode in predict: Track detailed model training and set target losses with get_loss.

Fixes

  • CUDA Compatibility: predict now works with CUDA-enabled GPUs (device ID selection pending).

v1.1.1

09 Oct 07:11

OpenHosta v1.1.1 - 10/07/24

This patch focuses on performance enhancements and bug fixes related to prompt handling and output typing.

Features

  • Introduced the _last_request attribute to the Model object for enhanced tracking.

Optimization

  • Streamlined the emulate user prompt by limiting it to the function call string and moving additional information to the system prompt.
  • Enhanced the structure of the emulate prompt with markdown headers.
  • Eliminated unnecessary sentences such as confidence levels and transitions for a more concise output.

Internal

  • Refactored the prompt-building function within emulate for improved maintainability.
  • Removed the validate function in the LLM call response handler to simplify the codebase.

Fixes

  • Added code block syntax for clearer function definitions.
  • Re-added the diagramm attribute (now deprecated) for backward compatibility.
  • Explicitly included a neutral response (None) in the emulate prompt to handle edge cases.

Instructions for Update

To update to this version, use the following command:

pip install --upgrade OpenHosta

Remember to delete __hostacache__ folders to avoid version conflicts.
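
If you prefer to script that cleanup, here is a minimal sketch using only the Python standard library (nothing OpenHosta-specific is assumed):

import shutil
from pathlib import Path

# Recursively remove every __hostacache__ folder under the current project.
for cache_dir in Path(".").rglob("__hostacache__"):
    if cache_dir.is_dir():
        shutil.rmtree(cache_dir)
        print(f"removed {cache_dir}")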

v1.2-beta4

04 Oct 18:06
Pre-release

v1.2-beta4

Features

  • TrainingSet
    The TrainingSet feature is now available! It allows you to manage datasets for hosta_injected functions with the following tools:

    • .visualize: Visualize the dataset (examples and data) for a specific function and list its entries.
    • .add: Add new data into ho_data.
    • .generate: Create data by calling an LLM.
  • New Attributes in Predict Functions
    The Predict function now includes these new attributes:

    • .retrain: Retrain the model.
    • .continue_train: Continue training using the previous weights.
    • .emulate: Run the prediction function in emulation mode with a model.

Enhancements

  • load_examples has been renamed to load_training_example and now supports datasets in JSON, JSONL, and CSV formats for storage in hosta_cache.
  • The predict function now supports a verbose mode for clearer insight into model operations and includes a get_loss attribute to specify a target loss.

Full Changelog: v1.2-beta3...v1.2-beta4

v1.2-beta3

30 Sep 12:06
Pre-release

v1.2-beta3

  • Fixes

    • predict is now CUDA compatible (currently, the CUDA device ID cannot be chosen).
  • Enhancements

    • Merged with v1.1.

v1.1.0

27 Sep 11:30

OpenHosta v1.1.0 - 09/27/2024

This release introduces significant new features, enhancements, optimizations, and bug fixes to improve the functionality and performance of OpenHosta.

New Features

  • example Function: Adds examples for a specified hosta-injected function and caches them.
  • save_examples Function: Saves all examples of a hosta-injected function in a JSONL file.
  • load_examples Function: Loads an example file into a cache for a hosta-injected function.
  • set_prompt in PromptManager: Automatically changes a prompt in "prompt.json".
  • suggest Function: Provides the same behavior as the __suggest__ attribute, but as a standalone function.
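
A rough sketch of how these caching helpers might be combined is shown below. The emulate pattern follows the documented OpenHosta usage; the keyword names (hosta_out, hosta_func, hosta_path) and the exact signatures of example, save_examples, and load_examples are assumptions, not the confirmed API.

from OpenHosta import emulate, example, save_examples, load_examples  # import paths assumed

def is_prime(n: int) -> bool:
    """Return True if n is a prime number."""
    return emulate()

# Hypothetical: attach examples to the hosta-injected function and cache them.
example(7, hosta_out=True, hosta_func=is_prime)
example(8, hosta_out=False, hosta_func=is_prime)

save_examples(hosta_func=is_prime, hosta_path="is_prime_examples.jsonl")  # keywords assumed
load_examples(hosta_path="is_prime_examples.jsonl", hosta_func=is_prime)  # keywords assumed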

Changes

  • suggest attribute: Renamed diagramm to diagram.

Enhancements

  • thought Function: Automatically guesses its return type with LLM.
  • suggest Function: Returns data from LLM in a dictionary.
  • emulate Function:
    • Now works for class methods.
    • Integrates local variables of the emulated function into the LLM prompt.
    • Includes a verification output to check and convert the output to the specified type if necessary. Supported types include Pydantic models, types from the typing module (List, Dict, Tuple, etc.), and built-in types (dict, int, float, etc.).
    • Added the _last_request attribute to access the prompt sent to the LLM and _last_response to access the raw message received.
  • Support for Typing Module: Allows specific return types from the typing module, including List, Dict, Tuple, Set, FrozenSet, Deque, Iterable, Sequence, Mapping, Union, Optional, Literal.
  • Return Type Prediction: The return type predicted by thought is now attached as an attribute (_return_type).
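
The sketch below follows the emulate pattern shown in the OpenHosta documentation (returning emulate() from a typed, docstring-annotated function). Where the _last_request and _last_response attributes live is assumed here to be the function object itself; the exact holder may differ.

from typing import List, Optional
from OpenHosta import emulate, config  # import path assumed

config.set_default_apiKey("your-api-key")

def extract_emails(text: str) -> Optional[List[str]]:
    """Return every email address found in the text, or None if there is none."""
    return emulate()

result = extract_emails("Contact us at support@example.com or sales@example.com")
print(result)  # the output is verified and converted to Optional[List[str]]

# Assumed location of the tracking attributes added in this release.
print(extract_emails._last_request)   # prompt sent to the LLM
print(extract_emails._last_response)  # raw message received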

Optimizations

  • emulate Prompt: Removed confidence level, resulting in ~20% speed improvement.
  • Enhanced Prompts: Added code blocks, separators, and references to the processed function name for emulate, enhance, thought, and estimate.

Bug Fixes

  • emulate Function:
    • Now decorator-resistant and works in nested functions.
    • Now works when the emulated function is called inside another function.
    • Now works with the lru_cache decorator from the functools module (see the sketch after this list).
  • config.set_default_model: Works correctly with config.set_default_apiKey.
  • thought Function: Now accepts multiple arguments.
  • suggest and analytics Functions: Now call LLM with Model class (ai_call).
  • Flask Compatibility: Added compatibility with Flask.
  • Added a loop to find the frame in _extend_scope.
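
As a hedged illustration of the lru_cache fix referenced above, the combination would look roughly like this; the emulate pattern follows the documented usage and the function itself is purely illustrative.

from functools import lru_cache
from OpenHosta import emulate

@lru_cache(maxsize=128)
def capital_of(country: str) -> str:
    """Return the capital city of the given country."""
    return emulate()

capital_of("France")  # first call goes through the LLM
capital_of("France")  # repeated call is served from the functools cache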

Internal Improvements

  • Functional Tests: Added for each library feature.
  • Custom Exception Classes: Added for request and frame errors.
  • Makefile: Added for cleaning, packaging, and testing.
  • GitHub Workflows: Added for linting, formatting, and testing on push to dev and main branches.

Documentation

  • Google Colab Link: Integrated into the documentation.
  • Inconsistencies and Errors: Corrected various documentation issues.

Performance Metrics

  • emulate Function: Execution time decreased by ~20%, despite a slight performance regression since v1.0.2.
  • thought Function: Execution time increased significantly due to the type prediction system, resulting in a double LLM call.
  • Caching System: No impact on the execution time of OpenHosta functions.

Instructions for Update

To update to this version, use the following command:

pip install --upgrade OpenHosta

Remember to delete __hostacache__ folders to avoid version conflicts.

Acknowledgements

Thank you to all contributors for your valuable feedback and contributions: @ramosleandre, @MerlinDEVILLARD, @WilliamJlvt, and @battmanux.


v1.1-rc4

27 Sep 07:30
a944382
Pre-release

v1.1-rc4

  • Feature

    • Added the suggest function, which provides the same behavior as the __suggest__ attribute but as a standalone function.
  • Doc

    • Many inconsistencies and errors corrected.
  • Internal

    • Added GitHub workflows for linting, formatting, and testing on pushes to dev and main.
  • Fixes

    • The suggest attribute diagramm is now diagram.

Full Changelog: v1.1-rc3...v1.1-rc4