update links to reflect the new project.
Signed-off-by: Eric Pugh <[email protected]>
epugh committed Sep 18, 2024
1 parent 53fcd3e commit 49c6938
Showing 3 changed files with 7 additions and 7 deletions.
2 changes: 1 addition & 1 deletion _search-plugins/ltr/core-concepts.md
@@ -48,7 +48,7 @@ Rank?](http://opensourceconnections.com/blog/2017/02/24/what-is-learning-to-rank

Judgement lists, sometimes referred to as "golden sets", grade
individual search results for a keyword search. For example, our
-[demo](http://github.com/o19s/elasticsearch-learning-to-rank/tree/master/demo/)
+[demo](http://github.com/opensearch-project/opensearch-learning-to-rank-base/tree/main/demo/)
uses [TheMovieDB](http://themoviedb.org). When users search for
"Rambo" we can indicate which movies ought to come back for "Rambo"
based on our user's expectations of search.
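As a toy illustration (the titles and grades here are invented, not taken from the demo's actual judgment list), such a graded list could be represented as:

```python
# Hypothetical judgment list: for each query, grade candidate results
# from 0 (irrelevant) to 4 (exactly what the user wanted).
judgments = {
    "rambo": {
        "First Blood": 4,
        "Rambo": 4,
        "Rambo III": 3,
        "Bambi": 0,
    },
}
```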
2 changes: 1 addition & 1 deletion _search-plugins/ltr/logging-features.md
@@ -22,7 +22,7 @@ every feature-query to retrieve the scores of features.

For the sake of discussing logging, let's say we created a feature set like the following, which works with the TMDB data set from the
-[demo](https://github.com/o19s/OpenSearch-learning-to-rank/tree/master/demo):
+[demo](http://github.com/opensearch-project/opensearch-learning-to-rank-base/tree/main/demo/):

```json
PUT _ltr/_featureset/more_movie_features
```
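The body of the feature set isn't shown above. As a rough sketch only, a two-feature set in this style could be created like so (the host, feature names, and Mustache templates here are assumptions, not the demo's actual definition):

```python
import requests

# Hypothetical two-feature set: a title match and an overview match,
# both parameterized by the user's search keywords.
featureset = {
    "featureset": {
        "features": [
            {
                "name": "title_query",
                "params": ["keywords"],
                "template_language": "mustache",
                "template": {"match": {"title": "{{keywords}}"}},
            },
            {
                "name": "overview_query",
                "params": ["keywords"],
                "template_language": "mustache",
                "template": {"match": {"overview": "{{keywords}}"}},
            },
        ]
    }
}

# Assumes a local OpenSearch instance with the LTR plugin installed.
resp = requests.put(
    "http://localhost:9200/_ltr/_featureset/more_movie_features",
    json=featureset,
)
resp.raise_for_status()
```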
10 changes: 5 additions & 5 deletions _search-plugins/ltr/training-models.md
@@ -18,12 +18,12 @@ an extensive overview) and then dig into uploading a model.
## RankLib training

We provide two demos for training a model. A fully-fledged [RankLib
-Demo](http://github.com/o19s/elasticsearch-learning-to-rank/tree/master/demo)
+Demo](http://github.com/opensearch-project/opensearch-learning-to-rank-base/tree/main/demo/)
uses RankLib to train a model from OpenSearch queries. You can see
how features are
-[logged](http://github.com/o19s/elasticsearch-learning-to-rank-learning-to-rank/tree/master/demo/collectFeatures.py)
+[logged](http://github.com/opensearch-project/opensearch-learning-to-rank-base/tree/main/demo/collectFeatures.py)
and how models are
-[trained](http://github.com/o19s/elasticsearch-learning-to-rank-learning-to-rank/tree/master/demo/train.py)
+[trained](http://github.com/opensearch-project/opensearch-learning-to-rank-base/tree/main/demo/train.py)
. In particular, you'll note that logging creates a RankLib-consumable
judgment file that looks like:
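The sample lines themselves aren't shown in this diff; in general, RankLib consumes an SVMrank-style format of `<grade> qid:<query id> <feature id>:<value> ... # <comment>`. A minimal sketch with made-up grades, values, and document IDs:

```python
# Made-up judgment lines in the RankLib/SVMrank format; the comment after
# "#" conventionally records the document ID and query for readability.
judgment_lines = [
    "4 qid:1 1:9.8 2:10.7 # doc_a rambo",
    "3 qid:1 1:7.5 2:11.2 # doc_b rambo",
    "0 qid:1 1:0.0 2:6.8 # doc_c rambo",
]

# Assumed file name; the demo writes its own judgment file.
with open("judgments_with_features.txt", "w") as f:
    f.write("\n".join(judgment_lines) + "\n")
```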

@@ -35,7 +35,7 @@ judgment file that looks like:
Here, for query id 1 (Rambo), we've logged feature 1 (a title `TF*IDF`
score) and feature 2 (a description `TF*IDF` score) for a set of
documents. In
-[train.py](http://github.com/o19s/elasticsearch-learning-to-rank/demo/train.py)
+[train.py](http://github.com/opensearch-project/opensearch-learning-to-rank-base/tree/main/demo/train.py)
you'll see how we call RankLib to train one of its supported models
on this line:
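The exact call isn't shown in this diff; as a rough sketch, a typical RankLib invocation from Python looks something like the following (the jar name, ranker choice, and file paths are assumptions, not copied from train.py):

```python
import subprocess

# Standard RankLib command-line flags: -ranker 6 selects LambdaMART,
# -train points at the judgment file, -metric2t sets the training metric,
# and -save writes the resulting model. Paths and jar name are hypothetical.
subprocess.run(
    [
        "java", "-jar", "RankLib.jar",
        "-ranker", "6",
        "-train", "judgments_with_features.txt",
        "-metric2t", "NDCG@10",
        "-save", "model.txt",
    ],
    check=True,
)
```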

@@ -71,7 +71,7 @@ set). RankLib does not use feature names when training.
## XGBoost example

There's also an example of how to train a model [using
-XGBoost](http://github.com/o19s/elasticsearch-learning-to-rank/tree/master/demo/xgboost-demo).
+XGBoost](http://github.com/opensearch-project/opensearch-learning-to-rank-base/tree/main/demo/xgboost-demo).
Examining this demo, you'll see the difference in how RankLib is
executed compared to XGBoost. XGBoost will output a serialization format for
gradient boosted decision trees that looks like:
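The sample output isn't shown in this diff; as a rough sketch, the XGBoost Python API can produce such a JSON dump from a toy ranking problem (the data and parameters below are synthetic and purely illustrative):

```python
import numpy as np
import xgboost as xgb

# Toy training set: two feature scores per document and graded relevance labels.
X = np.array([[9.8, 10.7], [7.5, 11.2], [0.0, 6.8], [5.1, 3.9]])
y = np.array([4, 3, 0, 2])

dtrain = xgb.DMatrix(X, label=y)
dtrain.set_group([4])  # all four documents belong to the same query

params = {"objective": "rank:pairwise", "eta": 0.1, "max_depth": 3}
booster = xgb.train(params, dtrain, num_boost_round=10)

# Each tree is dumped as nested JSON with fields like "nodeid", "split",
# "split_condition", "yes", "no", and "children" (or "leaf" for leaves).
print(booster.get_dump(dump_format="json")[0])
```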
