🌈 New Release: ml5.js v0.3.0 🎉 (#386)
* add ImageData as valid image type

* add CVAE

* add latent dim

* add random generate

* fix p5Image support

* fix CVAE parameter

* Added a parameter to the save function so that a custom filename can be given to the saved model.

* Unet fix (#357)

Add uNet model and additional fixes

- adds uNet model from @zaidalyafeai ✨
- adds preload() for uNet
- uses loadImage from window.loadImage instead of window.p5.loadImage

* Added sentiment analysis (#339)

* Added sentiment analysis

* delete files

* fixed issues for pull request

* add p5 utils (#358)

* fix charRNN tests (#349)

* add tests to CharRNN

* test(CharRNN): add tests to CharRNN

added descriptive tests to ensure CharRNN behaves like its example

* remove dist

* Add tests to CharRNN (#307)

* add tests to CharRNN

* test(CharRNN): add tests to CharRNN

added descriptive tests to ensure CharRNN behaves like its example

* remove dist

* check preload support for other nets and classifiers (#313)

Adds specified nets to support preload // TODO: add examples showing appropriate use of preload

* change CharRNN specs to meet time limit, add initial code for videoClassifier

* videoClassifier functioning

* charRNN functional

* fix out of date file

* add preload support for cvae (#360)

* Update TensorFlow.js to 1.0.2 (#336)

* upgrade to tfjs1.0.0

* fix loadModel

* fix buffer

* fix getLayer

* Adds fixes to PR #332 for tfjs 1.0.2 updates (#366)

* upgrade to tfjs1.0.0

* fix loadModel

* fix buffer

* fix getLayer

* updated package lock

* added @tensorflow/tfjs-core as dependency

* add graphmodel for infer (#365)

* Add DCGAN Model into ml5 (#351)

* Create index.js

* updated index.js and DCGAN/index.js

* DCGAN updates and fixes (#362)

* Create index.js

* fixed DCGAN errors

* updates p5Utils destructuring, fixes linting issues, and updates tfjs to 1.0.2 to match dcgan reqs

* fixed cvae

* use this.model instead of using model as param to this.compute()

* Makes UNET compatible with tfjs 1.0.2 (#367)

* added package-lock

* updated UNET for use with tfjs 1.0.2

* Makes Sentiment compatible with tfjs 1.0.2 (#368)

* added package-lock

* rm sentiment-node

* changed loadModel to loadLayersModel

* Makes CVAE compatible with tfjs 1.0.2 (#369)

* added package-lock

* updates cvae to tfjs 1.0.2 api

* update tfjs to 1.1.2 (#373)

* featureExtractor: accept an HTML canvas or p5 canvas in addImage(), classify(), or predict()

* fix: KNNClassifier accepts a number as class index when addExample(features, number)

* added check for moz browser, ref: https://stackoverflow.com/questions/48623376/typeerror-capturestream-is-not-a-function (#375)

This addresses video capture breaking in YOLO and potentially other video-based functions that rely on .captureStream(). Because .captureStream() is still experimental, this adds the moz prefix and a browser check to detect whether we are running in Firefox (a minimal sketch of this kind of check appears after this list).

* rm todo

* updated package-lock.json

* Adds label number option to featureExtractor.classification()  (#376)

* changed numClasses to numLabels

* added num label as option to classification()

* updated FeatureExtractor Test with numLabels

* adds object as param to .classification()

* moved options into this.config

* fix feature extractor test - add .config

* added pose:poseWithParts into .singlePose() (#381)

* Adds jsdoc inline-documentation - work in progress (#378)

* added jsdoc documentation for imageClassifier

* adds dcgan documentation - needs checking

* Add jsdoc (#382)

* Add jsdocs for CharRNN

* Add jsdocs for CVAE

* Add jsdocs for FeatureExtractor

* Add jsdocs for KNN

* Add jsdocs for PitchDetection

* Add jsdocs for Pix2pix

* Add jsdocs for posenet

* Add jsdocs for Sentiment

* Add jsdocs for styletransfer

* add linebreaks to long lines

* added basic docs to sketchRnn

* added basic docs to unet

* added basic docs to word2vec

* added basic yolo docs

* Adds V0.3.0 to package.json and Readme for new release (#385)

* changed package.json to v0.3.0

* added latest version reference in readme

* added lib min - will remove after this release
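
The captureStream() bullet above describes the Firefox workaround only in prose. Below is a minimal sketch of that kind of check; the helper name and error message are illustrative and are not taken from the ml5 source.

```javascript
// Sketch only: prefer the standard (still experimental) captureStream(),
// fall back to Firefox's prefixed mozCaptureStream().
function getVideoStream(videoEl) {
  if (typeof videoEl.captureStream === 'function') {
    return videoEl.captureStream(); // Chrome and other Blink-based browsers
  }
  if (typeof videoEl.mozCaptureStream === 'function') {
    return videoEl.mozCaptureStream(); // Firefox-prefixed variant
  }
  throw new Error('captureStream() is not supported in this browser');
}
```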
joeyklee authored May 24, 2019
1 parent 03bab68 commit b4a0d76
Showing 30 changed files with 1,248 additions and 9,568 deletions.
3 changes: 2 additions & 1 deletion .gitignore
@@ -1,3 +1,4 @@
.env
dev
examples/es6/node_modules
experiments/node_modules
@@ -19,4 +20,4 @@ website/node_modules
website/i18n/*
!website/i18n/en.json

yarn-error.log
yarn-error.log
11 changes: 8 additions & 3 deletions README.md
@@ -19,15 +19,20 @@ ml5.js is heavily inspired by [Processing](https://processing.org/) and [p5.js](

There are several ways you can use the ml5.js library:

* You can use the latest version (0.2.3) by adding it to the head section of your HTML document:
* You can use the latest version (0.3.0) by adding it to the head section of your HTML document:

**v0.2.3**
**v0.3.0**
```javascript
<script src="https://unpkg.com/ml5@0.2.3/dist/ml5.min.js" type="text/javascript"></script>
<script src="https://unpkg.com/ml5@0.3.0/dist/ml5.min.js" type="text/javascript"></script>
```

* If you need to use an earlier version for any reason, you can change the version number.

**v0.2.3**
```javascript
<script src="https://unpkg.com/ml5@0.2.3/dist/ml5.min.js" type="text/javascript"></script>
```

**v0.1.3**

```javascript
53 changes: 10 additions & 43 deletions dist/ml5.min.js

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion dist/ml5.min.js.map

Large diffs are not rendered by default.

272 changes: 97 additions & 175 deletions package-lock.json

Large diffs are not rendered by default.

12 changes: 6 additions & 6 deletions package.json
@@ -1,6 +1,6 @@
{
"name": "ml5",
"version": "0.2.3",
"version": "0.3.0",
"description": "A friendly machine learning library for the web.",
"main": "dist/ml5.min.js",
"directories": {
@@ -88,11 +88,11 @@
]
},
"dependencies": {
"@magenta/sketch": "0.1.2",
"@tensorflow-models/mobilenet": "0.2.2",
"@tensorflow-models/posenet": "0.2.2",
"@tensorflow-models/knn-classifier": "0.2.2",
"@tensorflow/tfjs": "0.13.0",
"@magenta/sketch": "0.2.0",
"@tensorflow-models/knn-classifier": "1.0.0",
"@tensorflow-models/mobilenet": "1.0.0",
"@tensorflow-models/posenet": "1.0.0",
"@tensorflow/tfjs": "1.1.2",
"events": "^3.0.0"
}
}
140 changes: 140 additions & 0 deletions src/CVAE/index.js
@@ -0,0 +1,140 @@
// Copyright (c) 2018 ml5
//
// This software is released under the MIT License.
// https://opensource.org/licenses/MIT

/* eslint prefer-destructuring: ["error", {AssignmentExpression: {array: false}}] */
/* eslint no-await-in-loop: "off" */
/*
* CVAE: Run a conditional variational autoencoder on a pre-trained model
*/

import * as tf from '@tensorflow/tfjs';
import callCallback from '../utils/callcallback';

class Cvae {
/**
* Create a Conditional Variational Autoencoder (CVAE).
* @param {String} modelPath - Required. The url path to your model.
* @param {function} callback - Required. A function to run once the model has been loaded.
*/
constructor(modelPath, callback) {
/**
* Boolean value that specifies if the model has loaded.
* @type {boolean}
* @public
*/
this.ready = false;
this.model = {};
this.latentDim = tf.randomUniform([1, 16]);
this.modelPath = modelPath;
this.modelPathPrefix = '';

this.jsonLoader().then(val => {
this.modelPathPrefix = this.modelPath.split('manifest.json')[0]
this.ready = callCallback(this.loadCVAEModel(this.modelPathPrefix+val.model), callback);
this.labels = val.labels;
// get a zero-filled array of length labels.length + 1, e.g. [0, 0, 0, ...]
this.labelVector = Array(this.labels.length+1).fill(0);
});
}

// load tfjs model that is converted by tensorflowjs with graph and weights
async loadCVAEModel(modelPath) {
this.model = await tf.loadLayersModel(modelPath);
return this;
}

/**
* Generate a random result.
* @param {String} label - A label of the feature you want to generate.
* @param {function} callback - A function to handle the results of ".generate()". Likely a function to do something with the generated image data.
* @return {Object} - An object containing { raws, src, image }: the raw pixel data, an object URL for the generated image, and a p5.Image when p5 is available.
*/
async generate(label, callback) {
return callCallback(this.generateInternal(label), callback);
}

loadAsync(url){
return new Promise((resolve, reject) => {
if(!this.ready) reject();
loadImage(url, (img) => {
resolve(img);
});
});
};

getBlob(inputCanvas) {
return new Promise((resolve, reject) => {
if (!this.ready) reject();

inputCanvas.toBlob((blob) => {
resolve(blob);
});
});
}

checkP5() {
if (typeof window !== 'undefined' && window.p5 && this
&& window.p5.Image && typeof window.p5.Image === 'function') return true;
return false;
}

async generateInternal(label) {
const res = tf.tidy(() => {
this.latentDim = tf.randomUniform([1, 16]);
const cursor = this.labels.indexOf(label);
if (cursor < 0) {
console.log('Wrong input of the label!');
return [undefined, undefined]; // invalid input just return;
}

this.labelVector = this.labelVector.map(() => 0); // clear vector
this.labelVector[cursor+1] = 1;

const input = tf.tensor([this.labelVector]);

const temp = this.model.predict([this.latentDim, input]);
return temp.reshape([temp.shape[1], temp.shape[2], temp.shape[3]]);
});

const raws = await tf.browser.toPixels(res);

const canvas = document.createElement('canvas'); // consider using OffscreenCanvas
const ctx = canvas.getContext('2d');
const [x, y] = res.shape;
canvas.width = x;
canvas.height = y;
const imgData = ctx.createImageData(x, y);
const data = imgData.data;
for (let i = 0; i < x * y * 4; i += 1) data[i] = raws[i];
ctx.putImageData(imgData, 0, 0);

const src = URL.createObjectURL(await this.getBlob(canvas));
let image;
/* global loadImage */
if (this.checkP5()) image = await this.loadAsync(src);
return { src, raws, image };
}

async jsonLoader() {
return new Promise((resolve, reject) => {
const xhr = new XMLHttpRequest();
xhr.open('GET', this.modelPath);

xhr.onload = () => {
const json = JSON.parse(xhr.responseText);
resolve(json);
};
xhr.onerror = (error) => {
reject(error);
};
xhr.send();
});
}
}

const CVAE = (model, callback) => new Cvae(model, callback);


export default CVAE;
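
For orientation, a minimal usage sketch pieced together from the constructor and generate() signatures above. The ml5.CVAE factory name, the manifest path, the label, and the output element are assumptions drawn from this diff and the commit messages, not verified against the examples repo.

```javascript
// Sketch only: path, label, and element id are placeholders.
const cvae = ml5.CVAE('model/cvae/manifest.json', modelLoaded); // manifest.json lists the model file and labels

function modelLoaded() {
  // generate() takes one of the labels from manifest.json and an (err, result) callback
  cvae.generate('airplane', (err, result) => {
    if (err) {
      console.error(err);
      return;
    }
    // result = { src, raws, image }: an object URL, the raw pixel data,
    // and a p5.Image when p5 is available
    document.getElementById('output').src = result.src; // assumes an <img id="output"> on the page
  });
}
```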
62 changes: 60 additions & 2 deletions src/CharRNN/index.js
@@ -19,12 +19,35 @@ const regexWeights = /weights|weight|kernel|kernels|w/gi;
const regexFullyConnected = /softmax/gi;

class CharRNN {
/**
* Create a CharRNN.
* @param {String} modelPath - The path to the trained charRNN model.
* @param {function} callback - Optional. A callback to be called once
* the model has loaded. If no callback is provided, it will return a
* promise that will be resolved once the model has loaded.
*/
constructor(modelPath, callback) {
/**
* Boolean value that specifies if the model has loaded.
* @type {boolean}
* @public
*/
this.ready = false;

/**
* The pre-trained charRNN model.
* @type {model}
* @public
*/
this.model = {};
this.cellsAmount = 0;
this.cells = [];
this.zeroState = { c: [], h: [] };
/**
* The current state of the model's LSTM cells (cell and hidden states).
* @type {c: Array, h: Array}
* @public
*/
this.state = { c: [], h: [] };
this.vocab = {};
this.vocabSize = 0;
@@ -128,7 +151,7 @@ class CharRNN {
let probabilitiesNormalized = []; // will contain final probabilities (normalized)

for (let i = 0; i < userInput.length + length + -1; i += 1) {
const onehotBuffer = tf.buffer([1, this.vocabSize]);
const onehotBuffer = await tf.buffer([1, this.vocabSize]);
onehotBuffer.set(1.0, 0, input);
const onehot = onehotBuffer.toTensor();
let output;
@@ -174,17 +197,45 @@
};
}

/**
* Reset the model state.
*/
reset() {
this.state = this.zeroState;
}

/**
* @typedef {Object} options
* @property {String} seed
* @property {number} length
* @property {number} temperature
*/

// stateless
/**
* Generates content in a stateless manner, based on some initial text
* (known as a "seed"). Resolves with an object whose sample property holds
* the generated string.
* @param {options} options - An object specifying the input parameters of
* seed, length and temperature. Default length is 20, temperature is 0.5
* and seed is a random character from the model. For example:
* { seed: 'The meaning of pizza is: ', length: 20, temperature: 0.7 }.
* @param {function} callback - Optional. A function to be called when the model
* has generated content. If no callback is provided, it will return a promise
* that will be resolved once the model has generated new content.
*/
async generate(options, callback) {
this.reset();
return callCallback(this.generateInternal(options), callback);
}

// stateful
/**
* Predict the next character based on the model's current state.
* @param {number} temp - The temperature (sampling randomness) to use for the prediction.
* @param {function} callback - Optional. A function to be called once the
* prediction has been generated. If no callback is provided, it will
* return a promise that will be resolved once the prediction has been generated.
*/
async predict(temp, callback) {
let probabilitiesNormalized = [];
const temperature = temp > 0 ? temp : 0.1;
@@ -212,6 +263,13 @@
};
}

/**
* Feed a string of characters to the model state.
* @param {String} inputSeed - A string to feed the charRNN model state.
* @param {function} callback - Optional. A function to be called when
* the model has finished adding the seed. If no callback is provided, it
* will return a promise that will be resolved once the seed has been fed.
*/
async feed(inputSeed, callback) {
await this.ready;
const seed = Array.from(inputSeed);
@@ -223,7 +281,7 @@

let input = encodedInput[0];
for (let i = 0; i < seed.length; i += 1) {
const onehotBuffer = tf.buffer([1, this.vocabSize]);
const onehotBuffer = await tf.buffer([1, this.vocabSize]);
onehotBuffer.set(1.0, 0, input);
const onehot = onehotBuffer.toTensor();
let output;
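A short usage sketch of the CharRNN API documented above — a sketch only, assuming the factory is exposed as ml5.charRNN (the test file below calls charRNN(...) directly) and using a placeholder model path; the result's sample property matches what the tests check.

```javascript
// Sketch only: the model path is a placeholder for a directory of charRNN model files.
const rnn = ml5.charRNN('./models/my_charRNN_model/', modelReady);

function modelReady() {
  // Stateless generation: options follow the typedef above (seed, length, temperature).
  rnn.generate({ seed: 'The meaning of pizza is: ', length: 30, temperature: 0.7 }, (err, result) => {
    if (err) {
      console.error(err);
      return;
    }
    console.log(result.sample); // 30 newly generated characters
  });
}
```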
14 changes: 7 additions & 7 deletions src/CharRNN/index_test.js
@@ -9,7 +9,7 @@ const RNN_MODEL_URL = 'https://raw.githubusercontent.com/ml5js/ml5-data-and-mode

const RNN_MODEL_DEFAULTS = {
cellsAmount: 2,
vocabSize: 64
vocabSize: 223
};

const RNN_DEFAULTS = {
@@ -21,15 +21,15 @@ const RNN_DEFAULTS = {

const RNN_OPTIONS = {
seed: 'the meaning of pizza is: ',
length: 100,
length: 30,
temperature: 0.7
}

describe('charRnn', () => {
let rnn;

beforeAll(async () => {
jasmine.DEFAULT_TIMEOUT_INTERVAL = 120000; //set extra long interval due to issues with CharRNN generation time
jasmine.DEFAULT_TIMEOUT_INTERVAL = 20000; //set extra long interval due to issues with CharRNN generation time
rnn = await charRNN(RNN_MODEL_URL, undefined);
});

@@ -52,9 +52,9 @@ describe('charRnn', () => {
expect(result.sample.length).toBe(20);
});

// it('generates content that follows the set options', async() => {
// const result = await rnn.generate(RNN_OPTIONS);
// expect(result.sample.length).toBe(100);
// });
it('generates content that follows the set options', async() => {
const result = await rnn.generate(RNN_OPTIONS);
expect(result.sample.length).toBe(30);
});
});
});