- `git clone` the repo
- `npm install` to install all the node modules
- `node app.js` to run the server
- In a browser navigate to `localhost:3000`
- Enter a hostname into the text box, e.g. a URL starting with `http://www.`
- Click the button and wait (for what seems like ages) for your results.
SOME FIXES TO KNOWN ISSUES
- Modularized the crawler code into a package, which means there are no global variables and concurrent searches can be handled.
- NO TESTS!!
- ECONNRESET errors with URL www.gocardless.com: the gocardless web server is closing the request connection. I think too many requests are being sent in a short space of time and the web server doesn't like it. There is no fix for this currently, but I suspect that using a pool to slow down the rate of requests would help (see the sketch after this list).
- Timeout issue with slow internet connections: if a site is large and the internet speed is slow, the client's request may time out before the server can reply. If this happens the server will continue working until it replies, but the reply will never be handled by the client.
- More than one search initiated:
  - If a scrape is initiated, the homepage is then refreshed and another scrape is initiated, the first request will still carry on processing to completion in the server, but its reply will be ignored by the client. Really, when the page refreshes a 'kill request' should be sent for any open connections.
  - If a scrape is initiated and then another is initiated, the server will process both but only the last scrape will be displayed. A hacky fix: I have disabled the scrape button so a new search cannot be instigated while one is running.
- Very ugly UX but hey that's the way the dice fell!
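One possible shape for the pool idea above, assuming the `request` module is still being used: sharing a pool with a small `maxSockets` value caps the number of open connections per host, so requests queue up instead of hammering the server. The `fetchPage` name, the socket limit and the timeout are illustrative, not values from the actual code.

```js
var request = require("request");

// One shared pool for all crawler requests; maxSockets limits how many
// sockets are open to a host at once, so extra requests wait their turn.
var slowPool = { maxSockets: 2 };

function fetchPage(url, callback) {
  request({ url: url, pool: slowPool, timeout: 10000 }, function (err, response, body) {
    callback(err, response, body);
  });
}
```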
- Use JavaScript for the client side.
- Use Node.js for the server-side crawling code.
- Want to display a site-map-style image of all static assets used by the domain.
- Static assets should include: CSS files, Scripts/Libraries, Images [Video, Audio] and Links (QUESTION: anything else?)
- I envisage 4 branches of the site map, one for each type of static asset with each path displayed under each branch.
| StyleSheets | Scripts | Images/Video/Audio | Paths |
| --- | --- | --- | --- |
| /assets/../local.css | /assets/../app.js | /assets/../../brand.png | http://go-cardless.com/about |
| http://../bootstrap.css | http://../../angular.min.js | http://..favicon*/.. | http://go-cardless.com/contact |
- Use a global-namespace data store (an object) in the server for caching scraped/crawled paths/static files.
- The object or DB would need to take the form:
```js
var globalListCSS = {
  "path/to/css/file": true,
  "another/path/css": true,
};
var globalListScripts = {
  "path/to/script.js": true,
};
var globalListImages = {
  "path/to/image.png": true,
};
var globalListOfURLs = {
  "http://www.gocardless.com": true,
  "http://www.gocardless.com/blog": true,
};
```
- For scraping hrefs from the HTML I will use the npm module `cheerio`. I will search for anchor `a` tags and record any URLs (canonical and relative paths) that are relevant to the starting search hostname, and ignore any external URLs. I discovered that using regex on HTML is a big no-no.
- I will use the npm module `request` to make GET requests to the URLs in the globalCacheOfURLs.
- Using cheerio again, I will look for CSS, Script and IMG files and store them in their global lists (see the sketch after this list).
- When a URL has been 'scraped' I will update its property in the globalCacheOfURLs to have the value 'true'.
- When all properties in the globalCacheOfURLs are true, the scraping is finished and we can render the results.
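A minimal sketch of the request + cheerio flow described above. `scrapePage` is a hypothetical helper name and the error handling is simplified; it only shows fetching one page and collecting anchor hrefs without regexes.

```js
var request = require("request");
var cheerio = require("cheerio");

// Fetch a single page and pull out the anchor hrefs with cheerio.
function scrapePage(url, callback) {
  request(url, function (err, response, body) {
    if (err || response.statusCode !== 200) {
      return callback(err || new Error("Unexpected status " + response.statusCode));
    }
    var $ = cheerio.load(body);
    var hrefs = [];
    $("a").each(function () {
      var href = $(this).attr("href");
      if (href) hrefs.push(href);
    });
    callback(null, hrefs);
  });
}
```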
- Use a template (Jade) for server-side rendering of results
- Title and instructions telling the user to enter a URL to see its static asset structure
- Simple form with a text box and a button for submitting the URL
- Need to check the validity of the given URL
- Need to check the URL is a real URL using a regex (see the sketch after this list)
- If it is not well formed, e.g. isn't something like `something.*`, then show an alert to the user
- If the URL string is well formed but we get a 401, let the user know with an alert
- If we get a 200, carry on and display the results (if the results have been server-side rendered)
- Want to have a little animation of a sand timer or something while we wait for the results.
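A rough client-side sketch of the URL check and alert behaviour above; the regex is deliberately loose (it only insists on "something.something", with an optional http(s) scheme) and `checkUrl` is a hypothetical name, not a function from the repo.

```js
// Loose well-formedness check: optional http(s) scheme, then "something.something".
var URL_PATTERN = /^(https?:\/\/)?[^\s.]+\.[^\s]+$/i;

function checkUrl(value) {
  if (!URL_PATTERN.test(value)) {
    alert("Please enter a valid URL, e.g. http://www.example.com");
    return false; // don't submit the form
  }
  return true;
}
```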
- Server gets a request from client and extracts the search URL
- Server makes a GET request on the URL; if it gets back a 200 we should have the HTML of the first page.
- The HTML needs to be searched for each of the static asset types and their paths will be added to the data store (if not dupes).
- Any http(s) links found that are for the Search URL (and are not CSS, Images/etc or Scripts) need to then be requested and the filtering process repeated.
- The following is the order I'm initially thinking the filtering should take:
- Search for all (local and external) stylesheets - not sure what the regexp/JSON search should be yet
- Search for all (local and external) scripts - not sure what the regexp search should be yet
- Search for all `<img>` tags (this will get local and external images) - or just .gif, .jpg and .png files? QUESTION: where else can images be found? What about video? What about audio?
- Search for anything beginning with http:// or https://
- this will bring a LOT of results and they will also include Scripts, StyleSheets and Images/Video/Audio
- QUESTION: can I find an exhaustive way of filtering out Scripts, CSS and Images/Video/Audio if I've already found them?
- Any found links need to be checked for dupes in the data store and discarded if duped (see the sketch after this list).
- Any links that aren't dupes need to be added to the data store.
- Any links that don't have the searched-for host name should be added to the data store (if not duped) and then skipped.
- Any links for the host name searched for should be added to the data store and then collated in a local object (Links) until all of the links have been filtered.
- The 'Links' need to be requested one by one, repeating the above steps.
- After all of the pages have been scraped (and we've left the request's response callback):
- we should check again for any dupes and remove them.
- sort each category of static asset
- use Jade templating to generate a site map
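A small sketch of the dedupe/data-store step in the list above, using the plain-object cache idea from earlier. `recordLink` and the way the host name is compared are illustrative assumptions, not the repo's actual functions.

```js
var url = require("url");

// Add a link to the cache unless it is already there. Returns true only if
// the link is new AND belongs to the searched host (or is a relative path),
// i.e. it still needs to be crawled; external links are stored but skipped.
function recordLink(link, hostName, globalCacheOfURLs) {
  if (globalCacheOfURLs.hasOwnProperty(link)) {
    return false; // dupe: discard
  }
  globalCacheOfURLs[link] = false; // false = not scraped yet
  var parsed = url.parse(link);
  return !parsed.hostname || parsed.hostname === hostName;
}
```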
- Server gets a request from client and extracts the search URL
- Server makes a GET request on the URL; if it gets back a 200 we should have the HTML of the first page.
- The HTML needs to be searched for each of the static asset types and their paths will be added to the data store (if not dupes).
- Any http(s) links found that are for the Search URL (and are not CSS, Images/etc or Scripts) need to then be requested and the filtering process repeated.
- The following is the order I'm initially thinking the filtering should take:
- Search for all (local and external) stylesheets - not using regexp but using cheerio instead
- Search for all (local and external) scripts - not using regexp but using cheerio instead
- Search for all `<img>` tags (this will get local and external images) - or just .gif, .jpg and .png files? QUESTION: where else can images be found? What about video? What about audio? - haven't had time to look into these questions!!
- Search for other URL links - not using regexp but using cheerio to strip out anchor tags (see the sketch after this list)
- Any found links need to be checked for dupes in the data store and be discarded if duped.
- Any links that aren't duped need to be added to the data store
- Any links that don't have the host name searched for should be added to the data store (if not duped) and then skipped.
- Any links with the searched-for host name in its path should be added to the data store
- Any links added to the data store are entered with a 'false' value until they have been scraped.
- The 'Links' need to be requested one by one, repeating the above steps.
- After all of the pages have been scraped (and we've left the request's response callback):
- sort each category of static asset - haven't done this
- use Jade templating to generate a site map - I'm using a StyleSheet for sitemaps that I found.
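A minimal sketch of the cheerio-based search referred to above. The selectors are the obvious ones for stylesheets, scripts, images and anchors; the `extractAssets` name and the returned shape are illustrative, not the actual module's API.

```js
var cheerio = require("cheerio");

// Pull asset references out of one page's HTML using selectors, not regexes.
function extractAssets(html) {
  var $ = cheerio.load(html);
  var attrs = function (selector, name) {
    return $(selector).map(function () { return $(this).attr(name); }).get();
  };
  return {
    stylesheets: attrs('link[rel="stylesheet"]', "href"),
    scripts: attrs("script[src]", "src"),
    images: attrs("img[src]", "src"),
    links: attrs("a[href]", "href"),
  };
}
```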
- How do I search for the different asset types efficiently and exhaustively?
- Answer: using cheerio!
- Am I missing any types of static asset? I have thought of CSS, Scripts (.js), Images, Video, Audio and Links.
- Is using an object as a data store the way to go? I don't really want a global var. Maybe use some other caching, or Redis.
- Answer: there are definitely better ways to go about it, I'm sure (like sylv(sp)'s node-crawler module, which uses pools).
- I had a biiiig problem with Node.js's asynchronous nature with regard to making GET requests for all of the URLs. It took me a while to work out how to establish when all of the URLs had been scraped and when to generate the sitemap (see the sketch below).
- PROBLEM: The scraper works but is slow. I have really bad internet speed at the moment and sometimes there is a timeout and the sitemap doesn't get displayed.
- The way I have designed this, it is incredibly slow. Maybe the sitemap could be updated dynamically as it is scraped instead of when the entire site has been scraped.
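For what it's worth, here is one simple way to establish that every URL has been scraped, following the 'all values true' idea from earlier; `allScraped` and `onPageScraped` are hypothetical names, not the solution actually used in the repo.

```js
// The cache maps URL -> true (scraped) or false (still pending).
function allScraped(globalCacheOfURLs) {
  return Object.keys(globalCacheOfURLs).every(function (u) {
    return globalCacheOfURLs[u] === true;
  });
}

// Called from each request's response callback: mark the page as scraped and,
// once nothing is pending, render the sitemap.
function onPageScraped(pageUrl, globalCacheOfURLs, renderSitemap) {
  globalCacheOfURLs[pageUrl] = true;
  if (allScraped(globalCacheOfURLs)) {
    renderSitemap();
  }
}
```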