Atoka API User Reference

DRAFT VERSION (version: 20190904.1043)



You can dig into several million company records and enrich your data with just a few lines of code, thanks to our easy-to-use API endpoints.

We currently offer these services:

  • Company Search, your starting point when you are trying to identify which companies you are actually looking for;
  • Company Details, when you already have identified a company you are interested in, and need to get fine grained details about it;
  • Company Match, when you need to clean, verify or enrich information about a company you already have some data about.

The News Search service is complementary to the above ones, and lets you understand what is happening around companies: it aggregates and analyses several thousand daily news sources on your behalf, looking for company mentions and finding specific kinds of events.

The typical workflow is: first, use the search or match features to build a list of IDs of the companies you need more data about; then request the actual details of each one from the appropriate endpoint.
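
The two-step workflow can be sketched in Python. Note that the base URL and endpoint paths below are placeholders for illustration only, not the real Atoka endpoints; substitute the ones documented in this reference.

```python
from urllib.parse import urlencode

# Placeholder base URL: replace with the real one from this reference.
BASE = "https://api.example.com/v1"

def search_url(token, **filters):
    """Step 1: a search call that returns matching company IDs."""
    return BASE + "/companies?" + urlencode({"token": token, **filters})

def details_url(token, company_id, packages=None):
    """Step 2: a details call for one company ID."""
    params = {"token": token}
    if packages:
        params["packages"] = ",".join(packages)
    return BASE + "/companies/" + company_id + "?" + urlencode(params)

# Build the search first, then one details request per returned ID.
print(search_url("123", countries="it"))
print(details_url("123", "6da785b3adf2", packages=["base"]))
```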

But the design of the API allows for more advanced use cases, like getting just a high-level overview of the records satisfying specific criteria (use fields=facets), or getting useful data straight from your searches (pass the packages parameter).


You can check the operational status of the API on the status dashboard.

All API requests must be made over HTTPS. Calls made over plain HTTP will fail.

API requests without authentication will also fail. The exposed endpoints are read-only, so you can call them via either GET or POST requests. For parameters accepting multiple values, you can either:
  • follow the recommended practice of using the parameter just once, separating the values with the comma (,) character;
  • or pass the same parameter multiple times.

Just note that for values containing commas, you have to enclose each one in double quotes (for example: websitesContent="this, and that","something else").
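
The quoting rule above can be captured in a small helper. This is a sketch, not part of any official client; it reproduces the websitesContent example, where all values are quoted as soon as one of them contains a comma.

```python
def encode_multi_value(values):
    """Join values for a single API parameter with commas; if any
    value itself contains a comma, quote every value, as in the
    websitesContent example above."""
    if any("," in v for v in values):
        return ",".join('"%s"' % v for v in values)
    return ",".join(values)

print(encode_multi_value(["foo", "bar"]))  # foo,bar
print(encode_multi_value(["this, and that", "something else"]))
```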

Data in our API

Interested in discovering more about the available data and where it comes from? We have prepared a dedicated API Data page just for that.


Through your authentication credentials, we detect your subscription plan and return the appropriate level of company detail.

You can also use the data packages parameter to fine-tune the kind of information you get back from API calls.

If you have special needs and want to customize the results further, we can adapt the output for you: just get in touch with us at

Do you already have a token?

You can check your token credits from

Some sections of the API show different results depending also on the user making the request. If your application has multiple users, you can manage them and their permissions through the dedicated Authentication Endpoints.
To properly authenticate the request, the API requires you to send the appUser parameter, which should contain the identifier (inside your application) of the user sending the request.

If your application has no users, you can use all the services offered by the API without the "appUser" parameter.

  • token string required

    Use the authentication token you were provided.

  • appUser string

Use the id of your application's user.

    other parameters
    • $app_id optional

Old authentication solution. Must be used together with $app_key.

    • $app_key optional

      Old authentication solution. Must be used together with $app_id.

curl -G ""  -d "token=123"
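
In Python, the same authenticated query string can be prepared with the standard library. This is a sketch; the endpoint URL is elided here just as it is in the curl example above.

```python
from urllib.parse import urlencode

def build_auth_params(token, app_user=None):
    """Collect the authentication parameters: token is required,
    appUser only when your application manages multiple users."""
    params = {"token": token}
    if app_user is not None:
        params["appUser"] = app_user
    return params

print(urlencode(build_auth_params("123")))
print(urlencode(build_auth_params("123", app_user="user-42")))
```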

Credit Billing

Each token has a set of credit buckets; each bucket holds a different type of credit and may have a different duration.

Why different types of credits

The Atoka API offers various endpoints, each of which returns a collection of objects of a different type, and each collection requires its own type of credit: for company search/match/details you will need companies credits, for people endpoints you will need people credits, and so on.

Besides the first-class API collections, your token credit status will also show a long list of other credit buckets (ateco, admindiv, and so on) that are used by the suggester endpoints.

What the duration is

Each credit bucket has a duration that defines how long after its first usage the credits will be reset. You can check when this will happen for each credit bucket under next reset on

For example, a credit bucket with a 1 day duration will refill one day after the first credit is consumed from it. There can also be credit buckets with infinite duration, which will never be refilled.
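
The reset rule can be expressed as a one-line computation. This is a sketch; using None to model infinite duration is an assumption of this example, not API behaviour.

```python
from datetime import datetime, timedelta

def next_reset(first_usage, duration_days):
    """When a bucket refills: `duration_days` after the first credit
    was consumed; None models an infinite bucket that never refills."""
    if duration_days is None:
        return None
    return first_usage + timedelta(days=duration_days)

first = datetime(2019, 9, 4, 10, 43)
print(next_reset(first, 1))     # 2019-09-05 10:43:00
print(next_reset(first, None))  # None
```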

How to check credits status

You can check in real time the status of your token from

For each request submitted, you can check how many credits it consumed by reading the response header X-Uniapi-Cost, example:

"X-Uniapi-Cost": "units=90.0, companies=10.0, companies:*=10.0"

This means that the request consumed 10 companies credits and 10 companies:* credits.

Note: in this header you will also see unit credits, please ignore them since they are not considered in the token billing.
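
The header value is a simple comma-separated list, so parsing it (and dropping the unit credits, which are not billed) takes only a few lines. This helper is an illustrative sketch, not part of any official client.

```python
def parse_cost_header(value):
    """Parse an X-Uniapi-Cost header value into a dict of credit amounts."""
    costs = {}
    for part in value.split(","):
        name, _, amount = part.strip().partition("=")
        costs[name] = float(amount)
    return costs

costs = parse_cost_header("units=90.0, companies=10.0, companies:*=10.0")
# Drop the `units` entry, which is not considered in token billing.
billed = {k: v for k, v in costs.items() if k != "units"}
print(billed)  # {'companies': 10.0, 'companies:*': 10.0}
```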

How many credits each request costs

1 object = 1 credit

The quantity of credits used by each request depends only on the number and type of objects you get in the response. It doesn't matter whether you passed 5 filters, or whether it was a search or a match request: you will be billed for the number of companies (or locations, people, etc.) that you retrieved. Each collection has 3 sub-types of credits: <collection>, <collection>:* and <collection>:facets. Let's see when each one is used.

No Packages

If you perform a request with no package in the response, you will use only <collection> credits, one for each object in the response. We usually don't make API users pay for <collection> credits; this type of metric is mainly used for rate limiting on data and internal statistics.

Example: consumes 10 companies credits

X-Uniapi-Cost: companies=10.0

Data Packages

If you get at least one package in the response objects, this will cost you 1 <collection> credit + 1 <collection>:* credit for each object in the response.
To get packages in the response, add the packages parameter to the request. The number of packages obtained for each collection object does not matter for billing; what matters is whether at least one package was returned for a company. This applies regardless of the endpoint, be it search, match, or details.
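
Under these rules, the cost of a request can be estimated up front. The helper below is an illustrative sketch of the "1 object = 1 credit" billing described here (facet billing, covered later, is not modelled).

```python
def request_cost(collection, n_objects, any_packages):
    """Estimate the credits one request consumes: one <collection>
    credit per returned object, plus one <collection>:* credit per
    object when at least one package was requested and returned."""
    cost = {collection: float(n_objects)}
    if any_packages:
        cost[collection + ":*"] = float(n_objects)
    return cost

print(request_cost("companies", 10, any_packages=False))  # {'companies': 10.0}
print(request_cost("companies", 10, any_packages=True))
```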

Example: consumes 10 companies credits and 10 companies:* credits

X-Uniapi-Cost: units=90.0, companies=10.0, companies:*=10.0

Example: consumes 1 companies credit and 1 companies:* credit

X-Uniapi-Cost: units=9.0, companies=1.0, companies:*=1.0

Example: consumes 1 locations credit and 1 locations:* credit

X-Uniapi-Cost: units=9.0, locations=1.0, locations:*=1.0


Facets

When you ask for facets (using the facetFields parameter), your token will be billed with 1 credit of type <collection>:facets for each facet field in the response.

Example: since this call will give us 2 facets in the response (email and phones), it consumes 2 credits for companies:facets (you can ignore the companies:*:facets count)

X-Uniapi-Cost: companies:*:facets=2.0, companies:facets=2.0

In the API Tutorials you can check out some examples on how to request only the information you need by using only the strictly necessary credits.

Rate limiting

There is a rate limit on calls to the API of 10 requests per second per token. If you exceed the rate limit, your request will be denied with a status code of 429. The limit is implemented with a token bucket algorithm, with a fill rate of 10 req/s and a bucket size of 1500 to accommodate bursts of calls.
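
To see why a burst can briefly exceed 10 req/s without being denied, here is a minimal client-side token-bucket sketch with the published parameters (fill rate 10 req/s, capacity 1500). The exact server-side semantics are an assumption of this model.

```python
import time

class TokenBucket:
    """A bucket refilled at `rate` tokens per second, holding at most
    `capacity` tokens; each request consumes one token or is denied
    (the API answers with HTTP 429)."""
    def __init__(self, rate=10.0, capacity=1500):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# A tiny bucket for demonstration: a burst of 2 passes, then requests
# are denied until the bucket refills.
bucket = TokenBucket(rate=10.0, capacity=2)
print([bucket.allow() for _ in range(3)])
```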

Requests are counted at the HTTP level, so a batch call (see below) always counts as one. It is still preferable to avoid parallel batch calls, as they can consume a lot of API credits in a very short time.

Batch calls

This feature is currently not available to standard users.
If you need it, just get in touch with us at

It is possible to submit multiple calls to the same API endpoint with a single HTTP call. The trick is to "upload" either a JSON or a CSV file using the batch parameter. This parameter is available on all the endpoints.

The API has some caveats that must be taken into account, though:

  • the data must be sent as a POST using multipart/form-data (in contrast with application/x-www-form-urlencoded, which will not work for batch requests);
  • the token parameter must be sent in the query string, as if it were a GET request; only the token must be sent this way, all the other parameters must be included in the body.
See the curl example below for a concrete test.

Input formats

The file sent as the batch parameter can be either a JSON or a CSV file. Either way, please make sure the encoding is UTF-8.


JSON

The file must contain one JSON object per line; each object must contain a reqId field, which must be unique among all the requests in the file, and will be used in the output to identify the responses.
{"reqId": "r1", "name": "spaziodati"}
{"reqId": "r2", "regNumbers": "02241890223"}
{"reqId": "r3", "ids": "6da785b3adf2"}
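
A file in this format can be generated with a few lines of Python. This sketch writes my_batch.json (a name chosen here for illustration), mirroring the three example requests above.

```python
import json

# One request per line, each with a unique reqId that identifies
# its response in the batch output.
batch_requests = [
    {"reqId": "r1", "name": "spaziodati"},
    {"reqId": "r2", "regNumbers": "02241890223"},
    {"reqId": "r3", "ids": "6da785b3adf2"},
]

with open("my_batch.json", "w", encoding="utf-8") as f:
    for req in batch_requests:
        f.write(json.dumps(req) + "\n")
```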


CSV

The file header must list all the parameters that will be used for the batch requests, plus an extra column called reqId to identify each request; each line of the CSV file represents a different request.
reqId,name,regNumbers
r1,spaziodati,
r2,acme spa,
r3,,02241890223
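
Python's csv module produces a compliant file. The column names below (name, regNumbers) are illustrative, chosen to mirror the JSON example; use the parameters your batch actually needs.

```python
import csv

# The header must list every parameter used in the batch, plus the
# mandatory reqId column; missing values are left empty.
fields = ["reqId", "name", "regNumbers"]
rows = [
    {"reqId": "r1", "name": "spaziodati"},
    {"reqId": "r2", "name": "acme spa"},
    {"reqId": "r3", "regNumbers": "02241890223"},
]

with open("my_batch.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
```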

Common parameters

You can specify all the parameters in the batch file; if some parameters are common to all the requests, you can specify them once in the body of the request instead of repeating them. All such parameters are added to those read from the batch file (in the curl example below, the packages parameter is applied to all the queries found in the my_batch.csv file).

WARN: batch calls might consume many API credits; if your token is not allowed to retrieve all the information being generated, the response will not be partial: the whole output will be blocked, and an error will be returned. Take this into consideration when submitting big batch files.

Batch response

"meta": {
"count": int, // the number of processed batch requests.
"success": int, // the number of batch requests that succeeded.
"error": int // the number of batch requests that failed.
"responses": {
"<reqId:1>": { // the response of the first request
"meta": {
"count": int,
"limit": int
"items": [
"id": "string"
"<reqId:2>": { // the response of the second request
"meta": {
"count": int,
"limit": int
"items": [
"id": "string"
"<reqId:3>": { // the response of the third request
"meta": {
"count": int,
"limit": int
"items": [
"id": "string"
curl "" \
  -F "packages=base,socials" \
  -F batch=@my_batch.csv

Atoka CLI

Operating on a very large number of items through the Atoka API requires some strategy to obtain results efficiently. There are limits on the size of requests and responses, so for bulk operations we suggest dividing the input into chunks where possible and performing requests with a limited degree of parallelism.

We have developed a reference client implementation to help you with this kind of operation. At the moment, it supports downloading lists of people by their tax codes.

The client is available for the following platforms:

To understand how to use it, check out the documentation.
And if you would like to contribute, check out the source code.