
Materials

  • retailhero-recommender.zip (1 MB): baseline solution code and check queries data

Data

  • clients.csv: general info about clients;
  • products.csv: general info about stock items;
  • purchases.csv: clients’ purchase history prior to communication.

Participants are provided with a query dataset check_queries.tsv, based on the data above. All queries in the problem were made later than 2019-Mar-01. The file check_queries.tsv is in the following format:

<query> \t <next_transaction> \n
<query> \t <next_transaction> \n
...

Where <query> is a request’s body (see section “Solutions API” below), and <next_transaction> contains information regarding the next purchase (after the request was made). This data is used to calculate the precision of the ranked list of SKUs the service returned.
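For illustration, here is a minimal Python sketch of reading check_queries.tsv. That both columns contain JSON is an assumption based on <query> being the body of a POST /recommend request; check the files in retailhero-recommender.zip for the exact schema.

import json

# Assumption: both TSV columns hold JSON objects.
with open("check_queries.tsv", encoding="utf-8") as f:
    for line in f:
        query_raw, next_transaction_raw = line.rstrip("\n").split("\t", 1)
        query = json.loads(query_raw)                        # request body
        next_transaction = json.loads(next_transaction_raw)  # ground truth
        # ...replay `query` against the service and score the returned ranking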

Test queries are made on behalf of clients not present in the general dataset.

Solutions API

The solution should take the form of an HTTP server listening on port 8000 and provide two endpoints:

GET /ready

This endpoint should return 200 OK once the service is up and running. Any other response code means the service is not yet ready to receive requests. There is a hard limit on startup time (see “Technical Limitations” below). Startup time can be used to load data from disk and build all necessary data structures in RAM.

POST /recommend

This endpoint responds with a list of recommended SKUs. The incoming request is of type application/json and contains information about the client and the client’s purchase history. The web service should respond in JSON with the required header Content-Type: application/json.
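As a sketch, the required interface could be implemented with Flask as below (assuming Flask is available in the chosen base image; the recommended_products response field is an assumption, so consult the baseline code for the exact response schema):

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/ready", methods=["GET"])
def ready():
    # Return 200 OK only once all data structures are loaded into RAM.
    return "OK", 200

@app.route("/recommend", methods=["POST"])
def recommend():
    body = request.get_json()  # client info and purchase history
    # Placeholder: a real solution would rank SKUs for this client.
    recommended = ["sku_1", "sku_2"]  # hypothetical SKU identifiers
    return jsonify({"recommended_products": recommended[:30]})

if __name__ == "__main__":
    # Listen on all interfaces: requests may come from external machines.
    app.run(host="0.0.0.0", port=8000)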

Quality Criterion Explained

The quality of a submission is measured with MNAP@30: an average precision calculated for all test requests and normalized afterwards.

$$\text{MNAP} = \frac{1}{|Q|} \sum_{q \in Q} \frac{AP(q)}{\text{IdealAP}(q)}$$

$$\text{AP}(q) = \frac{1}{30} \sum_{k=1}^{30} \text{Precision}@k(q)$$

Here \(Q\) is the set of recommendation requests. \(\text{Precision}@k(q)\) is the fraction of relevant SKUs among the first \(k\) positions of the list. \(\text{IdealAP}(q)\) is the maximum possible value of \(\text{AP}(q)\) for a given request \(q\); it is achieved when the top positions of the recommendation list are filled with relevant recommendations.

\(\text{AP}\) ranges from 0 to 1. The minimum value \(\text{AP}=0\) means there were no relevant SKUs among the top 30 items of the list returned by the service. The maximum value \(\text{AP}=1\) means that all top 30 positions were relevant for the given request. The number of SKUs purchased by a client (the number of relevant SKUs) does not affect the weight of the request in the resulting metric.
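The metric can be transcribed directly from the formulas above. In the sketch below, queries with no relevant SKUs score 0; that treatment is an assumption, not part of the official statement.

def ap_at_30(recommended, relevant):
    # AP(q): mean of Precision@k over k = 1..30.
    relevant = set(relevant)
    hits, total = 0, 0.0
    for k in range(1, 31):
        if k <= len(recommended) and recommended[k - 1] in relevant:
            hits += 1
        total += hits / k  # Precision@k = hits / k
    return total / 30

def ideal_ap_at_30(num_relevant):
    # IdealAP(q): AP of a list whose top positions are all relevant.
    ideal_list = list(range(num_relevant))
    return ap_at_30(ideal_list, set(ideal_list))

def mnap_at_30(queries):
    # MNAP: mean of AP(q) / IdealAP(q); `queries` yields
    # (recommended_list, relevant_set) pairs.
    scores = []
    for recommended, relevant in queries:
        ideal = ideal_ap_at_30(len(set(relevant)))
        scores.append(ap_at_30(recommended, relevant) / ideal if ideal else 0.0)
    return sum(scores) / len(scores)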

Submissions Format

The grader expects algorithm code submitted as a ZIP archive. All solutions are evaluated in an isolated environment inside a Docker container. There are resource and time limitations (see “Technical Limitations” below). In general, there is no need to be familiar with Docker to submit solutions.

At its top level, the archive should contain a file metadata.json with the following contents:

{"image": "datasouls/python","entry_point": "python generator.py"}

The image field names the Docker image used as the base image to run your solution, and entry_point is the command that starts the solution. From the web service’s point of view, its current directory is the root of the archive.

One can use the following docker images:

  • datasouls/python: Python 3 with many popular libraries preinstalled
  • openjdk: for submissions in Java
  • any other Docker image accessible from DockerHub

Participants may use their own images with any required libraries and software installed, but such an image must be published on DockerHub to be used in a solution.
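For example, a submission archive could be assembled with a short Python script like the one below. Here generator.py matches the entry_point above, while the model/ directory is a hypothetical stand-in for whatever data files the service loads at startup.

import json
import zipfile
from pathlib import Path

metadata = {"image": "datasouls/python", "entry_point": "python generator.py"}
Path("metadata.json").write_text(json.dumps(metadata))

with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("metadata.json")  # must sit at the top level of the archive
    zf.write("generator.py")   # the entry point named in metadata.json
    for path in Path("model").rglob("*"):
        zf.write(path)         # hypothetical extra data files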

Technical Limitations

Solutions are run under the following conditions:

  • Each solution has access to 8 GB of RAM and 4 vCPUs;
  • Solutions have no access to the internet;
  • Startup time must be under 5 seconds (after that, your solution should respond with 200 OK at the /ready path);
  • Solutions are expected to handle a load of 20 requests per second; all requests must be processed independently;
  • Each /recommend request must be answered in less than 1 second, and 95% of requests in under 0.3 seconds;
  • HTTP requests may come from external machines, not only from localhost;
  • The maximum size of the solution archive is 1 GB;
  • The maximum size of the Docker image used is 10 GB.

During the test, all incoming requests will be sent in strict chronological order.
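A local smoke test against these limits might look like the sketch below, assuming the service is already running on port 8000 and the requests library is available; the request body fields are hypothetical placeholders.

import time
import requests

# Check readiness, then time a single /recommend call.
assert requests.get("http://localhost:8000/ready").status_code == 200

body = {"client_id": "unknown", "transaction_history": []}  # hypothetical
start = time.monotonic()
resp = requests.post("http://localhost:8000/recommend", json=body)
elapsed = time.monotonic() - start

assert resp.headers["Content-Type"].startswith("application/json")
assert elapsed < 1.0  # hard per-request limit; 95% must finish in 0.3 s
print(resp.json(), f"elapsed: {elapsed:.3f} s")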
