LIBMF Rust

LIBMF - large-scale sparse matrix factorization - for Rust

Installation

Add this line to your application's Cargo.toml under [dependencies]:

libmf = { version = "0.1" }

Getting Started

Prep your data in the format row_index, column_index, value

let mut data = libmf::Matrix::new();
data.push(0, 0, 5.0);
data.push(0, 2, 3.5);
data.push(1, 1, 4.0);
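
If your data lives in a file, you can stream it into the matrix. A minimal sketch, assuming a hypothetical ratings.csv of comma-separated row_index,column_index,value triples; only Matrix::new and push from above are LIBMF calls, and the index/value types are inferred from those push calls:

use std::fs::File;
use std::io::{BufRead, BufReader};

let mut data = libmf::Matrix::new();
let file = BufReader::new(File::open("ratings.csv").expect("open ratings.csv"));
for line in file.lines() {
    let line = line.expect("read line");
    let mut parts = line.split(',');
    // parse row_index, column_index, value from each line
    let row: i32 = parts.next().unwrap().trim().parse().expect("row index");
    let col: i32 = parts.next().unwrap().trim().parse().expect("column index");
    let value: f32 = parts.next().unwrap().trim().parse().expect("value");
    data.push(row, col, value);
}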

Create a model

let mut model = libmf::Model::new();
model.fit(&data);

Make predictions

model.predict(row_index, column_index);
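
For example, to estimate the entry at row 0, column 2 of the matrix built above:

let pred = model.predict(0, 2);
println!("predicted value: {}", pred);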

Get the latent factors (these approximate the training matrix)

model.p_factors();
model.q_factors();

Get the bias (average of all elements in the training matrix)

model.bias();

Save the model to a file

model.save("model.txt");

Load the model from a file

let model = libmf::Model::load("model.txt");

Pass a validation set

model.fit_eval(&train_set, &eval_set);
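
A minimal sketch of building separate training and validation matrices from a list of triples (the hold-out rule here is purely illustrative):

// hypothetical (row_index, column_index, value) triples
let triples = vec![(0, 0, 5.0), (0, 2, 3.5), (1, 1, 4.0), (1, 2, 2.0), (2, 0, 1.0)];

let mut train_set = libmf::Matrix::new();
let mut eval_set = libmf::Matrix::new();
for (i, (row, col, value)) in triples.iter().enumerate() {
    if i % 5 == 4 {
        // hold out every fifth entry for validation
        eval_set.push(*row, *col, *value);
    } else {
        train_set.push(*row, *col, *value);
    }
}

let mut model = libmf::Model::new();
model.fit_eval(&train_set, &eval_set);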

Cross-Validation

Perform cross-validation

model.cv(&data, 5);

Metrics

Calculate RMSE

model.rmse(&data);

Calculate MAE

model.mae(&data);

Parameters

Set parameters - default values below

model.loss = 0;                // loss function
model.factors = 8;             // number of latent factors
model.threads = 12;            // number of threads used
model.bins = 25;               // number of bins
model.iterations = 20;         // number of iterations
model.lambda_p1 = 0;           // coefficient of L1-norm regularization on P
model.lambda_p2 = 0.1;         // coefficient of L2-norm regularization on P
model.lambda_q1 = 0;           // coefficient of L1-norm regularization on Q
model.lambda_q2 = 0.1;         // coefficient of L2-norm regularization on Q
model.learning_rate = 0.1;     // learning rate
model.alpha = 0.1;             // importance of negative entries
model.c = 0.0001;              // desired value of negative entries
model.nmf = false;             // perform non-negative MF (NMF)
model.quiet = false;           // no outputs to stdout
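
For instance, to train a non-negative factorization with more latent factors and no console output (the values here are illustrative, not recommendations):

let mut model = libmf::Model::new();
model.factors = 20;
model.iterations = 30;
model.nmf = true;
model.quiet = true;
model.fit(&data);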

Loss Functions

For real-valued matrix factorization

  • 0 - squared error (L2-norm)
  • 1 - absolute error (L1-norm)
  • 2 - generalized KL-divergence

For binary matrix factorization

  • 5 - logarithmic error
  • 6 - squared hinge loss
  • 7 - hinge loss

For one-class matrix factorization

  • 10 - row-oriented pair-wise logarithmic loss
  • 11 - column-oriented pair-wise logarithmic loss
  • 12 - squared error (L2-norm)
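
The loss is selected by assigning one of the codes above to the loss parameter before fitting; for example, for a matrix of binary labels with logarithmic error:

// binary matrix factorization with logarithmic error (loss code 5)
let mut model = libmf::Model::new();
model.loss = 5;
model.fit(&data); // data should contain binary labels here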

Reference

Specify the initial capacity for a matrix

let mut data = libmf::Matrix::with_capacity(3);

Resources

  • LIBMF: A Library for Parallel Matrix Factorization in Shared-memory Systems

History

View the changelog

Contributing

Everyone is encouraged to help improve this project. Here are a few ways you can help:

  • Report bugs
  • Fix bugs and submit pull requests
  • Write, clarify, or fix documentation
  • Suggest or add new features

To get started with development:

git clone --recursive https://github.com/ankane/libmf-rust.git
cd libmf-rust
cargo test