From 25c7a12693a5005c6ade5aafd665811e7afebfb1 Mon Sep 17 00:00:00 2001
From: Jeremy Bernstein <14089157+jxbz@users.noreply.github.com>
Date: Wed, 20 Nov 2024 22:19:00 -0500
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 1920879..7897e2c 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@
 
 Modula is a deep learning framework designed for graceful scaling. Neural networks written in Modula automatically transfer learning rate across scale.
 
-We are slowly writing [the Modula docs](https://jeremybernste.in/modula/). Check them out for an accessible introduction to scaling theory and the Modula API. Also, here are some [slides](https://docs.google.com/presentation/d/1mCp6weIty9BzFFmx7LUGk2MPmNi-m-yKjigQ9wnycng/edit?usp=sharing) for a talk that Jeremy gave, that provide a more visual introduction to Modula. And here is [Modulax](https://github.com/GallagherCommaJack/modulax/) by Jack Gallagher. And here is a barebones implementation of [Modula in NumPy](https://colab.research.google.com/drive/1lKS15RJilGsstYP5JDQKSn3Z7TUUYIDQ?usp=sharing).
+We are slowly writing [the Modula docs](https://docs.modula.systems/). Check them out for an accessible introduction to scaling theory and the Modula API. Also, here are some [slides](https://docs.google.com/presentation/d/1mCp6weIty9BzFFmx7LUGk2MPmNi-m-yKjigQ9wnycng/edit?usp=sharing) for a talk that Jeremy gave, that provide a more visual introduction to Modula. And here is [Modulax](https://github.com/GallagherCommaJack/modulax/) by Jack Gallagher. And here is a barebones implementation of [Modula in NumPy](https://colab.research.google.com/drive/1lKS15RJilGsstYP5JDQKSn3Z7TUUYIDQ?usp=sharing).
 
 Modula is an experimental framework based on our research paper: [Scalable Optimization in the Modular Norm](https://arxiv.org/abs/2405.14813).