import os
import re
import time
import random

import click

from s2 import SemanticScholarAPI
from util import *

'''
s2 search API format:
    results
        matchedAuthors
        matchedPresentations
        query
        querySuggestions
        results
        stats
        totalPages
        totalResults
'''


@click.command()
@click.option('--index', '-n', default=0, help='Index of CSV (query,)')
def fetch_entries(index):
    keys, lines = read_citation_list(index)
    citation_lookup = []
    s2 = SemanticScholarAPI()
    for line in lines:
        key = line[0]
        name = line[1]
        title = line[2].strip()
        # Strip characters the search endpoint may choke on.
        clean_title = re.sub(r'[^-0-9a-zA-Z ]+', '', line[2])
        if len(clean_title) < 2:
            continue
        dump_fn = './datasets/s2/dumps/{}.json'.format(key)
        entry_fn = './datasets/s2/entries/{}.json'.format(key)
        result = None
        if os.path.exists(entry_fn):
            # Already fetched; reuse the cached entry.
            result = read_json(entry_fn)
        else:
            results = s2.search(clean_title)
            write_json(dump_fn, results)
            if len(results['results']) == 0:
                print("- {}".format(title))
            else:
                print("+ {}".format(title))
                result = results['results'][0]
                write_json(entry_fn, result)
        if result:
            paper_id = result['id']
            fetch_paper(s2, paper_id)
            citation_lookup.append([key, name, title, paper_id])
    write_csv("datasets/citation_lookup.csv",
              keys=['key', 'name', 'title', 'paper_id'],
              rows=citation_lookup)


def fetch_paper(s2, paper_id):
    # Papers are sharded by the first two characters of the id.
    paper_dir = './datasets/s2/papers/{}/{}'.format(paper_id[0:2], paper_id)
    os.makedirs(paper_dir, exist_ok=True)
    paper_fn = '{}/paper.json'.format(paper_dir)
    if os.path.exists(paper_fn):
        return read_json(paper_fn)
    print(paper_id)
    paper = s2.paper(paper_id)
    if paper is None:
        # Retry once before giving up.
        print("Got no paper, retrying")
        # time.sleep(random.randint(1, 2))
        paper = s2.paper(paper_id)
        if paper is None:
            print("Paper not found")
            return None
    write_json(paper_fn, paper)
    # time.sleep(random.randint(1, 2))
    return paper


if __name__ == '__main__':
    fetch_entries()