From 110f3a34f1f36d0ea999d4aa34bbe66d5f2a01da Mon Sep 17 00:00:00 2001
From: Jules Laplace
Date: Sun, 16 Dec 2018 15:02:59 +0100
Subject: skip empty, pull citations again

---
 scraper/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/scraper/README.md b/scraper/README.md
index 318bba9a..4399abd3 100644
--- a/scraper/README.md
+++ b/scraper/README.md
@@ -70,9 +70,9 @@ Included in the content-script folder is a Chrome extension which scrapes Google
 
 Once you have the data from S2, you can scrape all the PDFs (and other URLs) you find, and then extract institutions from those and geocode them.
 
-### s2-dump-pdf-urls.py
+### s2-dump-db-pdf-urls.py
 
-Dump PDF urls (and also IEEE urls etc) to CSV files.
+Dump PDF urls (and also DOI urls etc) to CSV files.
 
 ### s2-fetch-pdf.py
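
Note: the README text touched by this patch describes s2-dump-db-pdf-urls.py as dumping PDF URLs (and also DOI URLs etc.) to CSV files. The sketch below illustrates what that kind of dump step might look like; it is an assumption for illustration only — the database schema (a `papers` table with `id`, `pdf_url`, and `doi` columns) and the file names are hypothetical and not taken from the actual script.

```python
# Hypothetical sketch of a "dump PDF/DOI URLs to CSV" step.
# The papers table and its id/pdf_url/doi columns are assumed,
# not taken from the real s2-dump-db-pdf-urls.py.
import csv
import sqlite3

def dump_urls(db_path: str, out_path: str) -> None:
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT id, pdf_url, doi FROM papers")
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "url"])
        for paper_id, pdf_url, doi in rows:
            if pdf_url:
                # Skip empty URL fields (possibly what the commit
                # subject's "skip empty" refers to).
                writer.writerow([paper_id, pdf_url])
            elif doi:
                # Fall back to a resolvable DOI URL when no PDF URL exists.
                writer.writerow([paper_id, f"https://doi.org/{doi}"])
    conn.close()

if __name__ == "__main__":
    dump_urls("s2.db", "pdf-urls.csv")
```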