Expert Identification

The Expert Identification workflow consists of two steps. In the first step, the user extracts concepts from an abstract. The user can then review and modify the list of extracted concepts and feed it into the actual expert identification query.

The API Labs contain a notebook with examples related to expert identification.

Step 1: Concept Extraction

Extracting concepts is implemented using the extract_concepts function:

    extract_concepts("document abstract")

This query returns a list of extracted concepts, ordered by weight in descending order.
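Since the user can review and modify the extracted concepts before step two, a small sketch may help illustrate that workflow. Note that the field names "concept" and "score" below are assumptions for illustration, not the documented response schema:

```python
# Hypothetical shape of an extract_concepts result; the field names
# "concept" and "score" are assumptions, not the documented schema.
extracted = [
    {"concept": "malaria", "score": 0.92},
    {"concept": "effective malaria vaccine", "score": 0.85},
    {"concept": "effective prevention", "score": 0.61},
]

def top_concepts(concepts, n=5):
    """Return the names of the n highest-weighted concepts.

    Assumes the list is already ordered by weight, descending,
    as the query result is.
    """
    return [c["concept"] for c in concepts[:n]]
```

A user might, for example, keep only the top few concepts before passing them to step two.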

Step 2: Expert Identification

The concepts extracted in step one can be used in identify experts queries, for example:

    identify experts from concepts "+malaria OR \"effective malaria vaccine\" OR \"effective prevention\""
        using publications
        where research_org_countries is not empty
            and year >= 2013
        return experts[basics]
        limit 20 skip 0
        annotate organizational, coauthorship overlap
            with ["ur.016204724721.35", "ur.012127355561.32"]
The general form of the query is:

    identify experts from concepts [<list of extracted concepts>] using <source> <where-filter>? return experts<[fields]> <paging>? <annotate?>
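Assembling the quoted, OR-joined concept expression by hand is error-prone, so a helper like the following may be useful. This is a simplified sketch: it escapes multi-word concepts as in the example above, but does not handle search operators such as the + prefix, and the default filters are only the recommended ones:

```python
def build_identify_experts_query(concepts, source="publications", limit=20, skip=0):
    """Build an identify experts DSL query string from a concept list.

    Simplified sketch: multi-word concepts are wrapped in escaped
    quotes, as in the documentation example; search operators such
    as a leading + are not handled.
    """
    quoted = [f'\\"{c}\\"' if " " in c else c for c in concepts]
    expression = " OR ".join(quoted)
    return (
        f'identify experts from concepts "{expression}" '
        f'using {source} '
        f'where research_org_countries is not empty and year >= 2013 '
        f'return experts[basics] limit {limit} skip {skip}'
    )
```

For example, `build_identify_experts_query(["malaria", "effective malaria vaccine"])` produces a query equivalent in structure to the example above.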

Expert identification can be executed on the publications or grants source. The using publications clause may be omitted; the publications source is used by default.

The where filter is optional. The fields available for filtering are exactly the same as in a standard search expression.

Similarly, pagination is optional. It can be used to retrieve more than the default twenty reviewer candidates.
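The limit and skip values needed to walk through a larger candidate set follow the usual pattern; a minimal sketch:

```python
def page_params(total, page_size=20):
    """Generate (limit, skip) pairs to page through up to `total`
    candidates, page_size at a time."""
    return [(page_size, skip) for skip in range(0, total, page_size)]
```

For example, fetching fifty candidates twenty at a time requires three queries with skip values 0, 20, and 40.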

Annotating results with organizational and/or coauthorship conflicts adds another object to each identified expert, with keys organizational and coauthorship. The organizational key contains a boolean value indicating whether an organizational conflict with the provided researchers was found; the coauthorship key contains a number if a coauthorship conflict was found.

An organizational conflict is reported as a boolean value that is true if the expert and the researchers from the query have the same current research organization.

A coauthorship conflict is reported as the number of documents the expert has coauthored with any of the researchers provided in the query in the last three years.
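A typical use of these annotations is to screen out candidates with any conflict of interest. The record shape below is a hypothetical sketch based on the description above, not the documented response schema:

```python
# Hypothetical shape of annotated expert records; the exact response
# schema is an assumption based on the description above.
experts = [
    {"id": "ur.0001", "annotations": {"organizational": False, "coauthorship": 0}},
    {"id": "ur.0002", "annotations": {"organizational": True,  "coauthorship": 2}},
    {"id": "ur.0003", "annotations": {"organizational": False, "coauthorship": 1}},
]

def without_conflicts(experts):
    """Keep only experts with no organizational conflict and no
    coauthored documents with the researchers from the query."""
    return [
        e for e in experts
        if not e["annotations"]["organizational"]
        and e["annotations"]["coauthorship"] == 0
    ]
```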

Returned experts are ordered by relevance.


It is recommended to use the two basic filters shown in the example above, specifically research_org_countries is not empty and year >= 2013. These filters ensure maximum compatibility with the expert identification workflow in the Dimensions UI. Additional filters may be combined with them to narrow results further.