Tuesday, 27 December 2016

How Data Mining is Useful to Companies?

Every business, organization and government body collects large amounts of data for research and development. Such a huge database can put information at their fingertips whenever it is required. The problem is that finding the important information within that data takes a great deal of time. "If you want to grow rapidly, you must take quick and accurate decisions to grab timely available opportunities."

By applying the process of data mining, you can easily extract and filter the required information from raw data. Data mining is a process of refining data and extracting important information from it. The process is mainly divided into three stages: pre-processing, mining and validation. In pre-processing, a large amount of relevant data is collected and cleaned. The mining stage includes data classification, clustering, error correction and linking of information. The last but equally important stage is validation, without which you cannot trust the resulting information. In short, data mining is a process of converting data into authentic information.
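
As a rough illustration of those three stages, here is a minimal Python sketch. The file name, the column names and the use of k-means clustering are assumptions chosen purely for the example, not something described in the original article.

  import pandas as pd
  from sklearn.cluster import KMeans
  from sklearn.metrics import silhouette_score

  # 1. Pre-processing: collect relevant data and clean it
  #    ("sales.csv" and its columns are hypothetical)
  df = pd.read_csv("sales.csv")
  df = df.dropna(subset=["revenue", "orders"])      # drop incomplete rows
  features = df[["revenue", "orders"]]

  # 2. Mining: cluster the records to surface structure
  model = KMeans(n_clusters=3, n_init=10, random_state=0)
  df["segment"] = model.fit_predict(features)

  # 3. Validation: sanity-check the result before trusting it
  score = silhouette_score(features, df["segment"])
  print(f"{len(df)} records grouped into 3 segments, silhouette={score:.2f}")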

Let's have a look at how data mining is useful to companies.

Fast and Feasible Decisions: Searching for information in a huge pile of data takes a lot of time, and it frustrates the person who has to do it. With an annoyed mind, one cannot take accurate decisions, that's for sure. With the help of data mining, one can get to the information quickly and make fast decisions. It also helps to compare information against various factors, so the decisions become more reliable. Data mining helps make every decision quicker and more feasible.

Powerful Strategies: After data mining, information becomes precise and easy to understand. While making strategies, one can easily analyze the information along various dimensions. This analysis gives a realistic picture of how a strategy will work in practice, so management can implement powerful strategies effectively and expand business boundaries.

Competitive Advantage: Because the information is readily available and precise, it can be compared with competitors' information. Such comparison is essential; without it, the business suffers. After doing competitive analysis, one can take corrective decisions and move ahead of competitors. This is how a company gains a competitive advantage.

Your business can get all the benefits of data mining at low cost through outsourcing.

Source : http://ezinearticles.com/?How-Data-Mining-is-Useful-to-Companies?&id=2835042

Monday, 12 December 2016

Data Extraction - A Guideline to Use Scraping Tools Effectively

Many people around the world do not know much about these scraping tools. In their view, mining means extracting resources from the earth. In this internet age, the newly mined resource is data. There are many data mining software tools available on the internet for extracting specific data from the web. Every company in the world deals with tons of data, and managing this data and converting it into a useful form is genuinely hectic work. If the right information is not available at the right time, a company loses valuable time and cannot base its strategic decisions on accurate information.

This kind of situation costs you opportunities in the present competitive market. Data extraction and data mining tools help you take strategic decisions at the right time and reach your goals in this competitive business. These tools bring many advantages: you can store customer information in an orderly manner, you can track the operations of your competitors, and you can measure your own company's performance. It is critical for every company to have this information at its fingertips when it is needed.

To survive in this competitive business world, data extraction and data mining are critical to a company's operations. A powerful tool called a website scraper is used in online digital mining. With this tool, you can filter data on the internet and retrieve information for specific needs. This scraping tool is used in many fields and comes in numerous varieties. Research, surveillance, and the harvesting of direct-marketing leads are just a few of the ways a website scraper assists professionals in the workplace.

A screen scraping tool is another tool useful for extracting data from the web. It is particularly helpful when you work on the internet and want to mine data onto your local hard disk. It provides a graphical interface that lets you specify the Universal Resource Locator (URL), the data elements to be extracted, and scripting logic to traverse pages and work with the mined data. You can run this tool at periodic intervals and download data from the internet into your spreadsheets. The most important of the scraping tools is data mining software: it extracts large amounts of information from the web and converts that data into a useful format. This kind of tool is used in many business areas, especially by those who generate leads, set budgets by watching competitors' prices, and analyse online trends. With such a tool, the information is gathered and can be used immediately for your business needs.
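
As a rough sketch of this scrape-to-spreadsheet workflow, here is a minimal Python example. The URL, the CSS selectors and the output file name are placeholders chosen for illustration; any real page would need its own selectors.

  import csv
  import requests
  from bs4 import BeautifulSoup

  URL = "https://example.com/products"        # placeholder page to scrape

  # Download the page and parse its HTML
  response = requests.get(URL, timeout=30)
  soup = BeautifulSoup(response.text, "html.parser")

  # Pull out the elements of interest (selectors are hypothetical)
  rows = []
  for item in soup.select("div.product"):
      name = item.select_one("h2").get_text(strip=True)
      price = item.select_one("span.price").get_text(strip=True)
      rows.append([name, price])

  # Save the extracted data to a spreadsheet-friendly CSV file
  with open("products.csv", "w", newline="") as f:
      writer = csv.writer(f)
      writer.writerow(["name", "price"])
      writer.writerows(rows)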

Another useful scraping tool is the email scraping tool, which crawls public email addresses from various web sites. You can easily build a large mailing list with this tool and use it to promote your product online, send offers for related business proposals, and much more. With this tool you can find customers who are likely to be interested in your product, or potential business partners, which allows you to expand your business in the online market.

Many well-established and esteemed organizations provide these features free of cost as a trial offer to customers. If you want a permanent service, you need to pay a nominal fee. You can also download these tools from their web sites.

Source:http://ezinearticles.com/?Data-Extraction---A-Guideline-to-Use-Scrapping-Tools-Effectively&id=3600918

Wednesday, 7 December 2016

Scraping in PDF Files - Improving Accessibility

Data scraping is a procedure in which information contained on the net in HTML, PDF and other documents is sorted out mechanically. It is also about collecting relevant data and saving it in spreadsheets or databases for later retrieval. On most sites, text content can be accessed easily in the source code, but a good number of businesses now use the Portable Document Format instead. This format was launched by Adobe, and documents in it can be viewed on almost any operating system. Some people convert documents from Word to PDF when they need to send files over the net, and many convert PDF back to Word so that they can edit their documents. The biggest benefit of the format is that documents look like a replica of the original: they appear organized and identical on almost all operating systems, with nothing disturbed in the layout. The downside is that the text in such files is often converted into a picture or image, so copying and pasting it is no longer possible.

Scraping in this format is a procedure in which data is scraped from such files, and it requires a more diverse set of tools. You will find two main forms of PDF file: one is built from a text file, and the other is built from an image. Adobe itself offers software that can capably scrape text-based files. For image-based files, a special application is needed for the task.

An OCR program is the primary tool used for this. An Optical Character Recognition program scans documents for small pictures that can be segregated into letters. The pictures are compared with actual letters, and when they match well, the letters are copied into a file. These programs can scrape image-based files quite aptly, but they cannot be called perfect. Once the procedure is done, you can search through the data to find the areas and parts you were looking for. More often than not, it is difficult to find a utility that can obtain exactly the data you need without proper customization. But if thoroughly checked, you cou
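
As a rough sketch of the OCR route for image-based PDFs, here is a minimal Python example. It assumes the third-party packages pdf2image and pytesseract (plus the poppler and Tesseract binaries) are installed; the file names are placeholders.

  from pdf2image import convert_from_path
  import pytesseract

  # Render each page of the (hypothetical) image-based PDF as a picture
  pages = convert_from_path("scanned_report.pdf", dpi=300)

  # Run optical character recognition on every page and collect the text
  text = "\n".join(pytesseract.image_to_string(page) for page in pages)

  # Save the recognised text so it can be searched and scraped further
  with open("scanned_report.txt", "w", encoding="utf-8") as f:
      f.write(text)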

Source: http://ezinearticles.com/?Scraping-in-PDF-Files---Improving-Accessibility&id=6108439

Friday, 2 December 2016

Web Data Extraction Services and Data Collection From Website Pages

For any business, market research and surveys play a crucial role in strategic decision making. Web scraping and data extraction techniques help you find relevant information and data for your business or personal use. Most of the time, professionals manually copy-paste data from web pages or download whole websites, which wastes time and effort.

Instead, consider using web scraping techniques that crawl through thousands of website pages to extract specific information and simultaneously save it into a database, CSV file, XML file or any other custom format for future reference.

Examples of web data extraction process include:
• Spider a government portal, extracting names of citizens for a survey
• Crawl competitor websites for product pricing and feature data
• Use web scraping to download images from a stock photography site for website design

Automated Data Collection
Web scraping also allows you to monitor changes in website data over a stipulated period and collect that data on a schedule, automatically. Automated data collection helps you discover market trends, understand user behaviour and predict how the data will change in the near future. A small sketch of a scheduled collector follows the examples below.

Examples of automated data collection include:
• Monitor price information for selected stocks on an hourly basis
• Collect mortgage rates from various financial firms on a daily basis
• Check weather reports on a constant basis, as and when required
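
As a rough sketch of such scheduled, automated collection, here is a minimal Python loop that fetches a page once an hour and appends a value to a CSV file. The URL, the CSS selector and the file name are placeholders for illustration only.

  import csv
  import time
  from datetime import datetime

  import requests
  from bs4 import BeautifulSoup

  URL = "https://example.com/stock/ACME"      # placeholder quote page

  while True:
      # Fetch the page and pull out the price (selector is hypothetical)
      soup = BeautifulSoup(requests.get(URL, timeout=30).text, "html.parser")
      price = soup.select_one("span.price").get_text(strip=True)

      # Append a timestamped row so changes can be analysed later
      with open("prices.csv", "a", newline="") as f:
          csv.writer(f).writerow([datetime.now().isoformat(), price])

      time.sleep(60 * 60)                     # wait an hour before the next check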

Using web data extraction services, you can mine any data related to your business objective and download it into a spreadsheet so that it can be analysed and compared with ease.

In this way you get accurate and quicker results, saving hundreds of man-hours and money!

With web data extraction services you can easily fetch product pricing information, sales leads, mailing databases, competitor data, profile data and much more on a consistent basis.

Source:http://ezinearticles.com/?Web-Data-Extraction-Services-and-Data-Collection-Form-Website-Pages&id=4860417

Friday, 18 November 2016

How Xpath Plays Vital Role In Web Scraping

XPath is a language for finding information in structured documents like XML or HTML. You can say that XPath is (sort of) SQL for XML or HTML files. XPath is used to navigate through elements and attributes in an XML or HTML document.

To understand XPath we must be clear about elements and nodes which are the building blocks of XML and HTML. Let’s talk about them. Here is an example element in an HTML document:

   <a class="hyperlink" href="http://www.google.com">google</a>

Copy the above text to a file, name it sample.html and open it in a browser. It will render as a text link displaying the word "google", and clicking it will take you to www.google.com. Each element has three main parts: the type, the attributes, and the text. For this element they are:

   a                Type
   class, href      Attributes
   google           Text

Let's grab some XPath developer tools. I am using Firebug for Firefox; you can use Chrome's developer tools instead. We will now form some XPath expressions to extract data from the above element and verify them using the Firebug console.

For extracting the text "google":

   //a[@href]/text()

   //a[@class="hyperlink"]/text()
 
For extracting the hyperlink, i.e. "www.google.com":

   //a/@href
   //a[@class="hyperlink"]/@href

That's all for a single element, but in reality you need to deal with more complex forms.

Let's proceed to the idea of nodes and the familial relationships between HTML elements. Look at this example code:

 <div title="Section1">

   <table id="Search">

       <tr class="Yahoo">Yahoo Search</tr>

       <tr class="Google">Google Search</tr>

   </table>

 </div>

Notice the </div> at the bottom? That means the table and tr elements are contained within the div; they are considered descendants of the div. The table is a child, and the tr is a grandchild (and so on and so forth). The two tr elements are considered siblings of each other. This matters, because XPath uses these relationships to find your element.

So suppose you want to find the Google item. Any of the following expressions will work:

   //tr[@class="Google"]
   //div/table/tr[2]
   //div[@title="Section1"]//tr

Let's analyze the expressions. We start at the top element (also known as a node). // means to search all descendants; / means to look only at the current element's children. So //div means: look through all descendants for a div element. The brackets [] specify something about that element, so we can look for an attribute with the @ symbol, or look for text with the text() function. We can chain as many of these together as we need.

Here is a quick reference:

   //           Search all descendant elements
   /            Search all child elements
   []           The predicate (specifies something about the element you are looking for)
   @            Specifies an element attribute (for example, @title)
   .            Specifies the current node (useful when you want to look for an element's children in the predicate)
   ..           Specifies the parent node
   text()       Gets the text of the element
   
In the context of web scraping, XPath is a fine tool to have in your belt, as it allows you to specify document locations more flexibly than CSS selectors.
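
As a minimal illustration, here is how the expressions from this post could be evaluated with Python's lxml library (any other XPath-capable library would work just as well; the sample HTML is the one shown earlier).

  from lxml import html

  sample = '<a class="hyperlink" href="http://www.google.com">google</a>'
  doc = html.fromstring(sample)

  # Extract the link text and the href attribute with the XPath from above
  print(doc.xpath('//a[@class="hyperlink"]/text()'))   # ['google']
  print(doc.xpath('//a/@href'))                        # ['http://www.google.com']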

Please subscribe to our blog to get notified when we publish the next blog post.

Source: http://blog.datahut.co/how-xpath-plays-vital-role-in-web-scraping/

Friday, 28 October 2016

Tapping The Mining Services Goldmine

In Australia, resources booms tend to come and go. In a recent speech, Reserve Bank Deputy Governor Ric Battellino identified five major booms over the last two hundred years - from the gold rush of the 1850s, to our current minerals and energy boom.

Many have argued that the current boom is different from anything we've experienced before, with the modernisation of the Chinese and Indian economies likely to keep demand high for decades. That's led some analysts to talk of a resources supercycle. And yet a supercycle is still a cycle.

By definition, cycles are uneven, with commodity prices ebbing and flowing in response to demand, economic conditions and market sentiment. And the share prices of resources companies tend to move with them.

Which raises the question: what's the best way for investors to tap into the potential of the mining boom, without the heart-stopping volatility that mining stocks sometimes deliver?

Invest in the store that sells the spade

Legend has it that the people who really profited from Australia's gold rush weren't the miners who flocked to the fields, but the store-owners who sold them their spades and pans. You can put the same principle to work today by investing in mining services and engineering companies.

Here are five reasons to consider giving mining services companies a place in your portfolio:

1. Growing demand

In November, the Australian Bureau of Agricultural and Resource Economics reported that mining and energy companies plan to invest a record $132.9bn in new projects, a 58% increase on the previous year. That includes 72 projects at an advanced stage of development, such as the $43bn Gorgon LNG project and the $20bn Olympic Dam expansion. The mining services sector is poised to benefit from all of them.

The sector also stands to benefit from Australia's worsening skills shortage, with more companies looking to contractors to provide essential services in remote locations.

2. Less volatility

Resource stocks tend to fluctuate with commodity prices, which are subject to international economic forces and market sentiment beyond the control of any individual company. As a result, they are among the most volatile companies on the Australian sharemarket. But mining services stocks, while still exposed to the commodities cycle, tend to be more stable.

3. More predictable cash flow

One reason for the comparative volatility of commodity companies is that their cash flow can be very variable. In the development phase, they need to make significant capital expenditure, often leading to negative cash flows. And while they enjoy healthy revenues in the production phase, that revenue may diminish as a resource is exhausted, unless they make further investments in exploration and development.

In contrast, mining services companies require comparatively little capital investment, with more predictable cash flows over the long-term.

4. Higher dividends

Predictable cash flows and lower capital expenditures often allow services companies to pay out more of their earnings as dividends, making them more appealing for income-oriented investors.

5. No need to pick winners

Many miners are highly leveraged to demand for a single commodity, whether it's gold, coal, copper or iron ore, and some are reliant on a single mine or field. Services companies, by contrast, generally have a more diversified customer base.

Source: http://ezinearticles.com/?Tapping-The-Mining-Services-Goldmine&id=5924837

Sunday, 16 October 2016

How Web Scraping Affects your Revenue Growth

Web scraping is an indispensable resource when it comes to gaining an edge over the competition with the help of business intelligence. As more and more data gets created on the world wide web, the complexity of extracting it intensifies. Web scraping is a technology that demands an extensive tech stack, high-end resources and technically skilled labour. Given this resource-hungry nature, many businesses prefer outsourcing it rather than doing the scraping in-house. Here is a brief walk-through of web scraping so that you can get a grip on the whole process and understand how it could affect your revenue growth as a business.

Business intelligence

The competition among online businesses is at its peak. This has more to do with the ready availability of insightful data. When data acquisition at this scale wasn’t possible in the past, businesses made hit-or-miss decisions upon instincts. Now that every activity can be recorded, extracted as data and analysed to arrive at the best business decisions, companies are making the most of it to boost their revenue. This includes monitoring the activity of competitors on social media, price intelligence, sentiment analysis, gathering data for market research and much more. The use cases of web scraping in business is almost infinite. Business intelligence is extremely helpful for the survival of companies in a market that fluctuates often. Implementing a business intelligence strategy powered by web scraping can definitely give a boost to your revenue growth.

Cost centres involved in in-house Web Scraping

Web scraping, despite being a robust solution for extracting data from the web, is not going to be an easy path if your company is not technically rich already. It involves setting up resources like a tech stack and servers that can run the web crawler by a technically skilled team. Following are the primary cost centres involved in the web scraping process.

1. High end servers

Web scraping is a resource-intensive process. Considering the importance of uptime, the crawlers cannot be run on average-performance machines. To achieve optimum uptime and avoid crashes, the crawler has to run on high-performing servers located in different parts of the world. The quality of the servers is crucial to the consistency of the process, and these high-end servers make up a significant share of the cost involved in web scraping.

2. Technically skilled labour

Scanning through the source code to identify the tags that hold the required data points, and creating a program that can automatically fetch those data points from similar pages at large scale, requires deep programming skills. It goes without saying that employing skilled people incurs costs that could take a bite out of your revenue. Ideally, you will need a team of at least 10 to run a web scraping setup in-house.

3. An extensive tech stack

Although most of the software used for web scraping is open source, you will find yourself investing in paid software to make certain things easier or faster. Dealing with open-source software may not be as user friendly as the paid alternatives. In any case, having a tech stack with plenty of options is a necessary aspect of web scraping, and it incurs additional cost.

4. Maintenance

Building and running the web scraping setup is only half of the story. Since websites undergo changes often, there is a possibility of the crawler setup breaking from time to time. To avoid or solve this at the earliest, a monitoring system that involves both machines and humans is necessary. Monitoring and maintenance contribute to a considerable cost in the web scraping process.

Data as a service

If data for business is your requirement, a better way to acquire it is to depend on a company that can deliver it via the data-as-a-service route. Web scraping companies have already set up the high-end resources required to run the web crawlers, which you can use to get web scraping done at a much lower cost than you would incur by doing it on your own. You also save yourself from the complications and maintenance headaches associated with web scraping. Moreover, with a web scraping service, you can enjoy a much higher return on investment owing to the lowered cost of data acquisition. You can use our ROI calculator to compare the cost of an in-house web scraping setup with that of a hosted solution.

Source: https://www.promptcloud.com/blog/web-scraping-affects-revenue-growth

Monday, 3 October 2016

An Easy Way For Data Extraction

There are many data scraping tools available on the internet. With these tools you can download large amounts of data without any stress. Over the past decade, the internet revolution has turned the entire world into an information centre: you can obtain any type of information from the internet. However, if you want particular information for one task, you need to search many websites, and if you want to collect all of that information, you have to copy it and paste it into your documents, which is hectic work for anyone. With scraping tools, you save time and money and reduce the manual work.

A web data extraction tool extracts data from the HTML pages of different websites and compares it. Every day, many new websites are hosted on the internet, and it is not possible to look at all of them in a single day. With a data mining tool, you can cover far more of the web, and if you work with a wide range of applications, these scraping tools are very useful to you.

Data extraction software is used to compare structured data on the internet. Many search engines will help you find websites on a particular issue, but the data on different sites appears in different styles. A scraping expert helps you compare the data across sites and structure it for your records.

A web crawler software tool is used to index the web pages on the internet; it moves data from the internet to your hard disk so that you can browse it faster once it is downloaded. It is especially useful when you schedule downloads for off-peak hours: what would otherwise take a long time can be downloaded at a fast rate. Another tool, aimed at business people, is the email extractor. With this tool you can easily collect the email addresses of targeted customers and send them advertisements for your product at any time. It is the best tool for building a customer database.

There are more scraping tools available on the internet, and some esteemed websites provide information about them. You can download these tools by paying a nominal amount.

Source: http://ezinearticles.com/?An-Easy-Way-For-Data-Extraction&id=3517104

Tuesday, 27 September 2016

How to use Web Content Extractor(WCE) as Email Scraper?

Web Content Extractor is a great piece of web scraping software developed by the Newprosoft team. The software has an easy-to-use project wizard for creating a scraping configuration and scraping data from websites.

One day I came across Visual Email Extractor, which is also a Newprosoft product and similar to Web Content Extractor, but its primary use is to scrape email addresses by crawling the websites you feed to the scraper. I noticed that with a little modification to a Web Content Extractor project configuration, you can use it just like Visual Email Extractor to extract email addresses.

In this post I will show you the configuration that makes Web Content Extractor extract email addresses. I still recommend Visual Email Extractor, as it has a lot more features than simply extracting email with WCE.

Here is the configuration that makes WCE extract emails.

Step 1: Open Web Content Extractor, create a New Project and click Next.

Step 2: Under the Crawling Rules -> Advanced Rules tab, apply the following settings.

Crawling Level 1 Settings

for 'Follow Links if link text equals' text box enter the following values:
*contact*; *feedback*; *support*; *about*

for 'Follow links if URL contains' text box enter the following values:
contact; feedback; support; about

for 'Do not Follow links if URL contains' text box enter following values:

google.; yahoo.; bing; msn.; altavista.; myspace.com; youtube.com; googleusercontent.com; =http; .jpg; .gif; .png; .bmp; .exe; .zip; .pdf;

Set 'Maximum Crawling Depth' to 2

Set 'Crawling Order' to Depth First Crawling

Tick the check boxes below:

->Follow all internal links

  Crawling Level 2  Settings

set 'Follow links if link text equals' to below value

*contact*; *feedback*; *support*; *about*

set 'Follow links if url contains' text box to below value

contact; feedback; support; about

set 'DO NOT follow links if url contains' text box to below value

=http

Step 3: After applying the above settings, click Next -> in the Extraction Pattern window, click Define -> in the Web Page Address (URL) field enter any URL where an email address appears, and click the + sign to the right of Data Fields to define the scraping pattern.

Now, inside HTML Structure, select the HTML check box or the Body check box, which means that for each page the whole page content will be parsed.

The last setting extracts emails from the page using the regular-expression-based email extraction function. Open the Predefined Script window, select 'Extract_Email_Addresses' and click OK. If the page you used contains an email address, you will be able to see the harvested email in the Script Result.
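
For comparison, here is a minimal Python sketch of the same regular-expression-based idea: fetch a page and pull out anything that looks like an email address. The URL is a placeholder, and the pattern is deliberately simple rather than a complete email grammar.

  import re
  import requests

  URL = "https://example.com/contact"          # placeholder contact page

  # A deliberately simple email pattern for illustration
  EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

  page = requests.get(URL, timeout=30).text
  emails = sorted(set(EMAIL_RE.findall(page)))  # de-duplicate the harvested addresses
  print(emails)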

I hope this helps you use Web Content Extractor as an email scraper. Share your views in the comments.

Source: http://webdata-scraping.com/use-web-content-extractor-as-email-scraper/

Thursday, 15 September 2016

Things to take care while doing Web Scraping!!!

In the present day and age, web scraping has become one of the most popular terms in data science. Basically, web scraping is extracting information from websites using pre-written programs and scripts. Many organizations have successfully used website scraping to build relevant and useful databases that they use on a daily basis to further their business interests. This is the age of Big Data, and web scraping is one of the trending techniques in data science.

Throughout my journey of learning web scraping and implementing many successful scraping projects, I have come across some great experiences we can learn from. In this post, I'm going to discuss some of the approaches to take, and some to avoid, while executing web scraping.

Use Proxies: Anonymously scraping data from websites

One should not scrape a website from a single IP address, because when you repeatedly request web pages for scraping, there is a chance that the remote web server will block your IP address and prevent further requests. To overcome this, scrape websites with the help of proxy servers (anonymous scraping). This minimizes the risk of getting trapped and blacklisted by a website: the proxies hide your identity (network details) from the remote web servers while you scrape data. You may also use a VPN instead of proxies to scrape websites anonymously.
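
As a minimal sketch of routing scraping traffic through a proxy with Python's requests library, assuming you have access to a proxy server (the address below is a placeholder):

  import requests

  # Placeholder proxy address; substitute a proxy you are allowed to use
  proxies = {
      "http": "http://203.0.113.10:8080",
      "https": "http://203.0.113.10:8080",
  }

  # The remote server sees the proxy's address instead of yours
  response = requests.get("https://example.com/page", proxies=proxies, timeout=30)
  print(response.status_code)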

Take maximum data and store it.

Do not follow the pattern of "process the web page as it comes from the remote server". Instead, take all the information and store it to disk first. This approach is useful when your scraping algorithm breaks in the middle: you don't have to start scraping again from scratch. Never download the same content more than once, as you are just wasting bandwidth. Try to download all content to disk in one go and then do the processing.
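
A rough sketch of this download-first, process-later pattern in Python; the URL list and file naming are illustrative assumptions.

  import pathlib
  import requests

  # Hypothetical list of pages to scrape
  urls = [
      "https://example.com/page1",
      "https://example.com/page2",
  ]

  raw_dir = pathlib.Path("raw_pages")
  raw_dir.mkdir(exist_ok=True)

  # Pass 1: download everything to disk exactly once
  for i, url in enumerate(urls):
      target = raw_dir / f"page_{i}.html"
      if not target.exists():                      # avoid re-downloading the same content
          target.write_text(requests.get(url, timeout=30).text, encoding="utf-8")

  # Pass 2: parse the saved files; re-runs never touch the network
  for path in sorted(raw_dir.glob("*.html")):
      html_text = path.read_text(encoding="utf-8")
      # ... parsing logic goes here ...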

Follow strict rules in parsing:

Check various rules while parsing the information from the web site. For example, if you expect a value to be a date, check that it really is a date. This can greatly improve the quality of the information. When you get unexpected data, the algorithm needs to be changed accordingly.

Respect Robots.txt

Robots.txt specifies the set of rules that should be followed by web crawlers and robots. I strongly advise you to adjust your crawler to fully respect robots.txt. It contains instructions on exactly which pages you are allowed to crawl, which user agents they apply to, and the requisite intervals between page requests. Following these instructions minimizes the chance of getting blacklisted and banned by the website owner.
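
A minimal sketch of checking robots.txt with Python's standard library before fetching a page (the site, the path and the user-agent name are placeholders):

  from urllib import robotparser

  rp = robotparser.RobotFileParser()
  rp.set_url("https://example.com/robots.txt")
  rp.read()

  # Only fetch the page if the site's robots.txt allows our user agent to
  if rp.can_fetch("MyScraperBot", "https://example.com/some/page"):
      print("allowed to crawl")
  else:
      print("disallowed - skip this page")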

Use XPath Smartly

XPath is a nice option for selecting elements of an HTML document, and is more flexible than CSS selectors. But be careful: the HTML structure may change from page to page, so an XPath expression you wrote for one page may fail to extract data on another page.

Obey the Website's Terms and Conditions:

Some websites make it absolutely apparent in their terms and conditions that they are particularly against web scraping of their content. Ignoring this can leave you vulnerable to possible ethical and legal consequences.

Test sample scrape and verify the data with actual scrape

Once your web scraping project is set up, you need to test it for some time. Check the extracted data; if something is not right, find the cause, make changes accordingly, and iterate until you arrive at a solid web scraping project.

Source: http://webdata-scraping.com/things-take-care-web-scraping/

Tuesday, 6 September 2016

Benefits of Ruby over Python & R for Web Scraping

In this data-driven world, you need to be constantly vigilant, as information and key data for an organization keep changing all the while. If you get the right data at the right time in an efficient manner, you can stay ahead of the competition. Hence, web scraping is an essential way of getting the right data. This data is crucial for many organizations, and scraping techniques help them keep an eye on it and obtain the information that will benefit them further.

Web scraping involves both crawling the web for data and extracting the data from the page. There are several languages programmers prefer for web scraping; the top ones are Ruby, Python and R. Each language has its own pros and cons, but if you want the best results and a smooth flow, Ruby is what you should be looking at.

Ruby is very good at production deployments, and Ruby, Redis and Chef have proven to be a great combination. String manipulation in Ruby is very easy because it is based on Perl syntax. Ruby is also great for analyzing web pages using one of its very powerful gems, Nokogiri. Nokogiri is much easier to use than the comparable packages and libraries used by R and Python, and it can deal with broken HTML and HTML fragments easily. Ruby also has other gems, such as Sanitize and Loofah, that help clean up broken HTML.

Python programmers widely use a library called Beautiful Soup for pulling data out of HTML and XML files. It works with your favourite parser to provide idiomatic ways of navigating, searching and modifying the parse tree, and it commonly saves programmers hours or days of work. R programmers have a newer package called rvest that makes it easy to scrape data from HTML web pages, inspired by libraries like Beautiful Soup. It is designed to work with magrittr, so that you can express complex operations as elegant pipelines composed of simple, easily understood pieces.
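
For reference, here is a minimal Beautiful Soup sketch in Python of the kind of parse-tree navigation described above (the HTML snippet is invented for the example):

  from bs4 import BeautifulSoup

  html_doc = """
  <html><body>
    <h1>Headlines</h1>
    <ul>
      <li><a href="/a">First story</a></li>
      <li><a href="/b">Second story</a></li>
    </ul>
  </body></html>
  """

  soup = BeautifulSoup(html_doc, "html.parser")

  # Navigate and search the parse tree idiomatically
  for link in soup.find_all("a"):
      print(link.get_text(), "->", link["href"])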

To help you understand it more effectively, below is a comprehensive infographic for the same.

Ruby is far ahead of Python and R for cloud development and deployments. The Ruby Bundler system is great for managing and deploying packages from GitHub. Using Chef, you can start up and tear down nodes on EC2 at will, monitor for failures, scale up or down, reset your IP addresses, and so on. Ruby also has great testing frameworks like FakeWeb and Capybara, making it almost trivial to build a solid suite of unit tests and to include advanced features like crawling and scraping via WebKit or Selenium.

The only disadvantage of Ruby is its lack of machine learning and NLP toolkits, which makes it much harder to emulate the capabilities of a tool like Pattern. It can still be done, however, since most of the heavy lifting can be done asynchronously using Unix tools like liblinear or Vowpal Wabbit.

Conclusion

Each language has its plus points, and you can pick the one you are most comfortable with. But if you are looking for a smooth web scraping experience, then Ruby is the best option. That has been our choice too, for years, at PromptCloud, for the best web scraping results. If you have any further questions about this, feel free to get in touch with us.

Source: https://www.promptcloud.com/blog/benefits-of-ruby-for-web-scraping

Monday, 29 August 2016

How Web Scraping can Help you Detect Weak spots in your Business

Business intelligence is not a new term. Businesses have always employed experts to analyse their progress and the market and industry trends to keep the growth graph moving up. Now that we have big data, and web scraping as the tool to gather it, business intelligence has become even more fruitful. In fact, business intelligence has become necessary for survival now that the competition is fierce in every industry. This is why most enterprises depend on web scraping solutions to gather data relevant to their businesses. This data is highly insightful and dependable enough to base critical business decisions on. Business intelligence derived from web scraping is definitely a game changer for companies, as it can supply relevant and actionable data with minimal effort.

Most businesses have weak spots that are overlooked or hidden from plain sight. These weak spots, if left unnoticed, can gradually lead to the downfall of your company. Here is how you can use data acquired through web scraping to detect weak spots in your business and strengthen them.

Competitor analysis

Many times, you can find the flaws in your business by keeping a close watch on your competitors. Competitor analysis is something we owe to web scraping, as the level of competitive intelligence you can derive from it was never achievable in the past. By crawling forums and the social media sites where your target audience is, you can easily find out if a competitor is leveraging something you have overlooked. Competitor analysis is all about staying up to date with every action of your competitors, so that you are always prepared for their next strategic move. If your competitors are doing better than you, this data can be used to compare your business with theirs and show you where you fall short.

Brand monitoring on Social media

Since social media platforms are places where businesses and customers interact with each other directly, the data available on these sites is increasingly relevant to businesses. Any issues in your business operations will also show up in your customers' sentiments. Social media is a goldmine of sentiment data that can help you detect issues within your company. By analysing the posts that mention your brand or product on social media sites, you can identify which departments of your company are functioning well and which are not.

For example, if you run an ecommerce portal and many users are complaining on social media about delivery issues, you might want to switch to a logistics partner who does a better job. The ability to identify such issues early is extremely important, and that is where web scraping becomes a life saver. With social media scraping, monitoring your brand on social media is easier than ever before, and the chances of minor issues escalating into bigger ones are almost non-existent. Brand monitoring is crucial if you are a business operating in the online space. Social media scraping solutions are provided by many leading web scraping companies, which eliminates the technical complications of the process for you.

Finding untapped opportunities

There are always new and untapped markets and opportunities that are relevant to your business. Finding them is not an easy task with manual and outdated methods of research. Web scraping can fill this gap and help you find opportunities your company can use to extend its reach and progress. Sometimes, targeting the right audience makes all the difference you have been trying to make. By using web crawling to find mentions of your relevant keywords on the web, you can stay updated on your niche and move into any new untapped markets. Web crawling for keywords is explained in more detail in our previous blog post.

Bottom line

It is no cakewalk to stay ahead of the competition considering how competitive every industry has become in this digital age. It is crucial to find the weak spots and untapped opportunities of your business before someone else does. Of course, you can always use some help from technology when you need it. Web scraping is clearly the best way to find and gather the data that will help you figure these out. With web crawling solutions that can completely take care of this niche process, nothing is stopping you from using the data and insights that the web has in stock for your business.

Source: https://www.promptcloud.com/blog/web-scraping-detect-weak-spots-business

Wednesday, 17 August 2016

Three Common Methods For Web Data Extraction

Probably the most common technique traditionally used to extract data from web pages is to cook up some regular expressions that match the pieces you want (e.g., URLs and link titles). Our screen-scraper software actually started out as an application written in Perl for this very reason. In addition to regular expressions, you might also use some code written in something like Java or Active Server Pages to parse out larger chunks of text. Using raw regular expressions to pull out the data can be a little intimidating to the uninitiated and can get a bit messy when a script contains a lot of them. At the same time, if you're already familiar with regular expressions and your scraping project is relatively small, they can be a great solution.
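
As a small sketch of the regular-expression approach, here is a Python example that pulls URLs and link titles out of a page. The pattern is simplistic on purpose (it assumes straightforward anchor tags) and the URL is a placeholder.

  import re
  import requests

  URL = "https://example.com/news"              # placeholder page

  # Naive pattern for <a href="...">title</a>; real-world HTML may need more care
  LINK_RE = re.compile(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>',
                       re.IGNORECASE | re.DOTALL)

  page = requests.get(URL, timeout=30).text
  for href, title in LINK_RE.findall(page):
      print(title.strip(), "->", href)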

Other techniques for getting the data out can get very sophisticated as algorithms that make use of artificial intelligence and such are applied to the page. Some programs will actually analyze the semantic content of an HTML page, then intelligently pull out the pieces that are of interest. Still other approaches deal with developing "ontologies", or hierarchical vocabularies intended to represent the content domain.

There are a number of companies (including our own) that offer commercial applications specifically intended to do screen-scraping. The applications vary quite a bit, but for medium to large-sized projects they're often a good solution. Each one will have its own learning curve, so you should plan on taking time to learn the ins and outs of a new application. Especially if you plan on doing a fair amount of screen-scraping it's probably a good idea to at least shop around for a screen-scraping application, as it will likely save you time and money in the long run.

So what's the best approach to data extraction? It really depends on what your needs are, and what resources you have at your disposal. Here are some of the pros and cons of the various approaches, as well as suggestions on when you might use each one:

Raw regular expressions and code

Advantages:

- If you're already familiar with regular expressions and at least one programming language, this can be a quick solution.

- Regular expressions allow for a fair amount of "fuzziness" in the matching such that minor changes to the content won't break them.

- You likely don't need to learn any new languages or tools (again, assuming you're already familiar with regular expressions and a programming language).

- Regular expressions are supported in almost all modern programming languages. Heck, even VBScript has a regular expression engine. It's also nice because the various regular expression implementations don't vary too significantly in their syntax.

Disadvantages:

- They can be complex for those that don't have a lot of experience with them. Learning regular expressions isn't like going from Perl to Java. It's more like going from Perl to XSLT, where you have to wrap your mind around a completely different way of viewing the problem.

- They're often confusing to analyze. Take a look through some of the regular expressions people have created to match something as simple as an email address and you'll see what I mean.

- If the content you're trying to match changes (e.g., they change the web page by adding a new "font" tag) you'll likely need to update your regular expressions to account for the change.

- The data discovery portion of the process (traversing various web pages to get to the page containing the data you want) will still need to be handled, and can get fairly complex if you need to deal with cookies and such.

When to use this approach: You'll most likely use straight regular expressions in screen-scraping when you have a small job you want to get done quickly. Especially if you already know regular expressions, there's no sense in getting into other tools if all you need to do is pull some news headlines off of a site.

Ontologies and artificial intelligence

Advantages:

- You create it once and it can more or less extract the data from any page within the content domain you're targeting.

- The data model is generally built in. For example, if you're extracting data about cars from web sites the extraction engine already knows what the make, model, and price are, so it can easily map them to existing data structures (e.g., insert the data into the correct locations in your database).

- There is relatively little long-term maintenance required. As web sites change you likely will need to do very little to your extraction engine in order to account for the changes.

Disadvantages:

- It's relatively complex to create and work with such an engine. The level of expertise required to even understand an extraction engine that uses artificial intelligence and ontologies is much higher than what is required to deal with regular expressions.

- These types of engines are expensive to build. There are commercial offerings that will give you the basis for doing this type of data extraction, but you still need to configure them to work with the specific content domain you're targeting.

- You still have to deal with the data discovery portion of the process, which may not fit as well with this approach (meaning you may have to create an entirely separate engine to handle data discovery). Data discovery is the process of crawling web sites such that you arrive at the pages where you want to extract data.

When to use this approach: Typically you'll only get into ontologies and artificial intelligence when you're planning on extracting information from a very large number of sources. It also makes sense to do this when the data you're trying to extract is in a very unstructured format (e.g., newspaper classified ads). In cases where the data is very structured (meaning there are clear labels identifying the various data fields), it may make more sense to go with regular expressions or a screen-scraping application.

Screen-scraping software

Advantages:

- Abstracts most of the complicated stuff away. You can do some pretty sophisticated things in most screen-scraping applications without knowing anything about regular expressions, HTTP, or cookies.

- Dramatically reduces the amount of time required to set up a site to be scraped. Once you learn a particular screen-scraping application the amount of time it requires to scrape sites vs. other methods is significantly lowered.

- Support from a commercial company. If you run into trouble while using a commercial screen-scraping application, chances are there are support forums and help lines where you can get assistance.

Disadvantages:

- The learning curve. Each screen-scraping application has its own way of going about things. This may imply learning a new scripting language in addition to familiarizing yourself with how the core application works.

- A potential cost. Most ready-to-go screen-scraping applications are commercial, so you'll likely be paying in dollars as well as time for this solution.

- A proprietary approach. Any time you use a proprietary application to solve a computing problem (and proprietary is obviously a matter of degree) you're locking yourself into using that approach. This may or may not be a big deal, but you should at least consider how well the application you're using will integrate with other software applications you currently have. For example, once the screen-scraping application has extracted the data how easy is it for you to get to that data from your own code?

When to use this approach: Screen-scraping applications vary widely in their ease-of-use, price, and suitability to tackle a broad range of scenarios. Chances are, though, that if you don't mind paying a bit, you can save yourself a significant amount of time by using one. If you're doing a quick scrape of a single page you can use just about any language with regular expressions. If you want to extract data from hundreds of web sites that are all formatted differently you're probably better off investing in a complex system that uses ontologies and/or artificial intelligence. For just about everything else, though, you may want to consider investing in an application specifically designed for screen-scraping.

As an aside, I thought I should also mention a recent project we've been involved with that has actually required a hybrid approach of two of the aforementioned methods. We're currently working on a project that deals with extracting newspaper classified ads. The data in classifieds is about as unstructured as you can get. For example, in a real estate ad the term "number of bedrooms" can be written about 25 different ways. The data extraction portion of the process is one that lends itself well to an ontologies-based approach, which is what we've done. However, we still had to handle the data discovery portion. We decided to use screen-scraper for that, and it's handling it just great. The basic process is that screen-scraper traverses the various pages of the site, pulling out raw chunks of data that constitute the classified ads. These ads then get passed to code we've written that uses ontologies in order to extract out the individual pieces we're after. Once the data has been extracted we then insert it into a database.

Source: http://ezinearticles.com/?Three-Common-Methods-For-Web-Data-Extraction&id=165416

Monday, 8 August 2016

Getting Data from the Web

You’ve tried everything else, and you haven’t managed to get your hands on the data you want. You’ve found the data on the web, but, alas — no download options are available and copy-paste has failed you. Fear not, there may still be a way to get the data out. For example you can:

Get data from web-based APIs, such as interfaces provided by online databases and many modern web applications (including Twitter, Facebook and many others). This is a fantastic way to access government or commercial data, as well as data from social media sites.

Extract data from PDFs. This is very difficult, as PDF is a language for printers and does not retain much information on the structure of the data that is displayed within a document. Extracting information from PDFs is beyond the scope of this book, but there are some tools and tutorials that may help you do it.

Screen scrape web sites. During screen scraping, you’re extracting structured content from a normal web page with the help of a scraping utility or by writing a small piece of code. While this method is very powerful and can be used in many places, it requires a bit of understanding about how the web works.

With all those great technical options, don't forget the simple options: often it is worth spending some time searching for a file with machine-readable data, or calling the institution that holds the data you want.

In this chapter we walk through a very basic example of scraping data from an HTML web page.

What is machine-readable data?

The goal for most of these methods is to get access to machine-readable data. Machine-readable data is created for processing by a computer, rather than for presentation to a human user. The structure of such data relates to the contained information, not the way it is eventually displayed. Examples of easily machine-readable formats include CSV, XML, JSON and Excel files, while formats like Word documents, HTML pages and PDF files are more concerned with the visual layout of the information. PDF, for example, is a language which talks directly to your printer; it is concerned with the position of lines and dots on a page rather than with distinguishable characters.

Scraping web sites: what for?

Everyone has done this: you go to a web site, see an interesting table and try to copy it over to Excel so you can add some numbers up or store it for later. Yet this often does not really work, or the information you want is spread across a large number of web sites. Copying by hand can quickly become very tedious, so it makes sense to use a bit of code to do it.

The advantage of scraping is that you can do it with virtually any web site — from weather forecasts to government spending, even if that site does not have an API for raw data access.

What you can and cannot scrape

There are, of course, limits to what can be scraped. Some factors that make it harder to scrape a site include:

Badly formatted HTML code with little or no structural information e.g. older government websites.

Authentication systems that are supposed to prevent automatic access e.g. CAPTCHA codes and paywalls.

Session-based systems that use browser cookies to keep track of what the user has been doing.

A lack of complete item listings and possibilities for wildcard search.

Blocking of bulk access by the server administrators.

Another set of limitations are legal barriers: some countries recognize database rights, which may limit your right to re-use information that has been published online. Sometimes, you can choose to ignore the license and do it anyway — depending on your jurisdiction, you may have special rights as a journalist. Scraping freely available Government data should be fine, but you may wish to double check before you publish. Commercial organizations — and certain NGOs — react with less tolerance and may try to claim that you’re “sabotaging” their systems. Other information may infringe the privacy of individuals and thereby violate data privacy laws or professional ethics.

Tools that help you scrape

There are many programs that can be used to extract bulk information from a web site, including browser extensions and some web services. Depending on your browser, tools like Readability (which helps extract text from a page) or DownThemAll (which allows you to download many files at once) will help you automate some tedious tasks, while Chrome’s Scraper extension was explicitly built to extract tables from web sites. Developer extensions like FireBug (for Firefox, the same thing is already included in Chrome, Safari and IE) let you track exactly how a web site is structured and what communications happen between your browser and the server.

ScraperWiki is a web site that allows you to code scrapers in a number of different programming languages, including Python, Ruby and PHP. If you want to get started with scraping without the hassle of setting up a programming environment on your computer, this is the way to go. Other web services, such as Google Spreadsheets and Yahoo! Pipes also allow you to perform some extraction from other web sites.

How does a web scraper work?

Web scrapers are usually small pieces of code written in a programming language such as Python, Ruby or PHP. Choosing the right language is largely a question of which community you have access to: if there is someone in your newsroom or city already working with one of these languages, then it makes sense to adopt the same language.

While some of the click-and-point scraping tools mentioned before may be helpful to get started, the real complexity involved in scraping a web site is in addressing the right pages and the right elements within these pages to extract the desired information. These tasks aren’t about programming, but understanding the structure of the web site and database.

When displaying a web site, your browser will almost always make use of two technologies: HTTP, which is a way for it to communicate with the server and to request specific resources such as documents, images or videos; and HTML, the language in which web sites are composed.

The anatomy of a web page

Any HTML page is structured as a hierarchy of boxes (which are defined by HTML “tags”). A large box will contain many smaller ones — for example a table that has many smaller divisions: rows and cells. There are many types of tags that perform different functions — some produce boxes, others tables, images or links. Tags can also have additional properties (e.g. they can be unique identifiers) and can belong to groups called ‘classes’, which makes it possible to target and capture individual elements within a document. Selecting the appropriate elements this way and extracting their content is the key to writing a scraper.

Viewing the elements in a web page: everything can be broken up into boxes within boxes.

To scrape web pages, you’ll need to learn a bit about the different types of elements that can be in an HTML document. For example, the <table> element wraps a whole table, which has <tr> (table row) elements for its rows, which in turn contain <td> (table data) for each cell. The most common element type you will encounter is <div>, which can basically mean any block of content. The easiest way to get a feel for these elements is by using the developer toolbar in your browser: they will allow you to hover over any part of a web page and see what the underlying code is.

Tags work like book ends, marking the start and the end of a unit. For example <em> signifies the start of an italicized or emphasized piece of text and </em> signifies the end of that section. Easy.
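
To make this concrete, here is a small sketch using Python and lxml (the same library as the scraper below; the one-row table is invented for illustration, and lxml's cssselect support may require the separate cssselect package depending on your version). It parses the fragment and pulls the text out of each cell:

  from lxml import html                             # lxml's HTML parser, also used below

  snippet = "<table><tr><td>Example incident</td><td>2011-03-11</td></tr></table>"
  table_doc = html.fromstring(snippet)              # build the document object model
  for cell in table_doc.cssselect("tr td"):         # select every <td> inside a <tr>
      print cell.text                               # prints the text between the tags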

An example: scraping nuclear incidents with Python

NEWS is the International Atomic Energy Agency’s (IAEA) portal on world-wide radiation incidents (and a strong contender for membership in the Weird Title Club!). The web page lists incidents in a simple, blog-like site that can be easily scraped.

To start, create a new Python scraper on ScraperWiki and you will be presented with a text area that is mostly empty, except for some scaffolding code. In another browser window, open the IAEA site and open the developer toolbar in your browser. In the “Elements” view, try to find the HTML element for one of the news item titles. Your browser’s developer toolbar helps you connect elements on the web page with the underlying HTML code.

Investigating this page will reveal that the titles are <h4> elements within a <table>. Each event is a <tr> row, which also contains a description and a date. If we want to extract the titles of all events, we should find a way to select each row in the table sequentially, while fetching all the text within the title elements.

In order to turn this process into code, we need to make ourselves aware of all the steps involved. To get a feeling for the kind of steps required, let's play a simple game: in your ScraperWiki window, try to write up individual instructions for yourself, for each thing you are going to do while writing this scraper, like steps in a recipe (prefix each line with a hash sign to tell Python that this is not real computer code). For example:

  # Look for all rows in the table
  # Unicorn must not overflow on left side.

Try to be as precise as you can and don’t assume that the program knows anything about the page you’re attempting to scrape.

Once you’ve written down some pseudo-code, let’s compare this to the essential code for our first scraper:

  import scraperwiki
  from lxml import html

In this first section, we’re importing existing functionality from libraries — snippets of pre-written code. scraperwiki will give us the ability to download web sites, while lxml is a tool for the structured analysis of HTML documents. Good news: if you are writing a Python scraper with ScraperWiki, these two lines will always be the same.

  url = "http://www-news.iaea.org/EventList.aspx"
  doc_text = scraperwiki.scrape(url)
  doc = html.fromstring(doc_text)

Next, the code makes a name (variable): url, and assigns the URL of the IAEA page as its value. This tells the scraper that this thing exists and we want to pay attention to it. Note that the URL itself is in quotes as it is not part of the program code but a string, a sequence of characters.

We then use the url variable as input to a function, scraperwiki.scrape. A function will provide some defined job — in this case it’ll download a web page. When it’s finished, it’ll assign its output to another variable, doc_text. doc_text will now hold the actual text of the website — not the visual form you see in your browser, but the source code, including all the tags. Since this form is not very easy to parse, we’ll use another function, html.fromstring, to generate a special representation where we can easily address elements, the so-called document object model (DOM).

  for row in doc.cssselect("#tblEvents tr"):
      link_in_header = row.cssselect("h4 a").pop()
      event_title = link_in_header.text
      print event_title

In this final step, we use the DOM to find each row in our table and extract the event’s title from its header. Two new concepts are used: the for loop and element selection (.cssselect). The for loop essentially does what its name implies; it will traverse a list of items, assigning each a temporary alias (row in this case) and then run any indented instructions for each item.

The other new concept, element selection, makes use of a special language to find elements in the document. CSS selectors are normally used to add layout information to HTML elements, and they can be used to precisely pick an element out of a page. In this case (line 6) we're selecting #tblEvents tr, which will match each <tr> within the table element with the ID tblEvents (the hash simply signifies an ID). Note that this will return a list of <tr> elements.

On the next line (line 7), we apply another selector to find any <a> (which is a hyperlink) within an <h4> (a title). Here we only want to look at a single element (there's just one title per row), so we have to pop it off the top of the list returned by our selector with the .pop() function.

Note that some elements in the DOM contain actual text, i.e. text that is not part of any markup language, which we can access using the [element].text syntax seen on line 8. Finally, on line 9, we're printing that text to the ScraperWiki console. If you hit run in your scraper, the smaller window should now start listing the event names from the IAEA web site.

  Figure 58. A scraper in action (ScraperWiki)

You can now see a basic scraper operating: it downloads the web page, transforms it into the DOM form and then allows you to pick and extract certain content. Given this skeleton, you can try and solve some of the remaining problems using the ScraperWiki and Python documentation:

Can you find the address for the link in each event’s title?

Can you select the small box that contains the date and place by using its CSS class name and extract the element’s text?

ScraperWiki offers a small database to each scraper so you can store the results; copy the relevant example from their docs and adapt it so it will save the event titles, links and dates.

The event list has many pages; can you scrape multiple pages to get historic events as well?
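
As a starting point, here is one hedged sketch for the first three of those challenges. It reuses the doc and scraperwiki names from the scraper above, assumes ScraperWiki's documented scraperwiki.sqlite.save function for the datastore, and uses an invented CSS class name (.event-info) for the date-and-place box, so look up the real class name with your developer toolbar before relying on it:

  for row in doc.cssselect("#tblEvents tr"):
      links = row.cssselect("h4 a")
      if not links:
          continue                                     # skip rows without a title link
      link = links.pop()
      record = {
          "title": link.text,
          "url": link.get("href"),                     # challenge 1: the address behind the title link
      }
      info = row.cssselect(".event-info")              # challenge 2: made-up class name, replace with the real one
      if info:
          record["date_and_place"] = info[0].text_content()
      scraperwiki.sqlite.save(unique_keys=["url"], data=record)   # challenge 3: ScraperWiki's built-in datastore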

As you’re trying to solve these challenges, have a look around ScraperWiki: there are many useful examples in the existing scrapers — and quite often, the data is pretty exciting, too. This way, you don’t need to start off your scraper from scratch: just choose one that is similar, fork it and adapt to your problem.

Source: http://datajournalismhandbook.org/1.0/en/getting_data_3.html

Saturday, 30 July 2016

Best Alternative For Linkedin Data Scraping

Best Alternative For Linkedin Data Scraping

When I started my career in sales, one of the things that my VP of Sales told me is that "In sales, assumptions are the mother of all f**k ups." I know the F word sounds a bit inappropriate, but that is the exact phrase he used. He was trying to convey the simple point that every prospect is different, so don't guess; use data to come up with decisions.

I joined Datahut and we are working on a product that helps sales people. I thought I should discuss it with you guys and take your feedback.

Let me tell you how the idea evolved itself. At Datahut, we get to hear a lot of problems customers want to solve. Almost 30 percent of all the inbound leads ask us to help them with lead generation.

Most of them simply ask, “Can you scrape Linkedin for me”?

Every time, we politely refused.

But not anymore, we figured out a way to solve their problem without scraping Linkedin.

This should raise some questions in your mind.

1) What problem are they trying to solve? – Most of the time, their sales team does not have accurate data about the prospects. This leads to total chaos, and it ends up wasting both time and money pursuing leads that are not sales qualified.

2) Why do they need data specifically from LinkedIn? – LinkedIn is the world’s largest business network. In their view, there is no better place to find leads for their business than LinkedIn. And they are right, in a way.

3) Ok, then what is wrong with scraping LinkedIn? – Scraping LinkedIn is against its terms of service and can lead to legal issues. LinkedIn also has an excellent anti-scraping mechanism which can make scraping costly.

4) How severe is the problem? – The problem has a direct impact on the revenues as the productivity of the sales team is too low. Without enough sales, the company is a joke.

5) Is there a better way? – Of course, yes. The people with profiles on LinkedIn are on other sites too, e.g. Google Plus, CrunchBase, etc. If we can mine and correlate that data, we can generate leads with rich information, and with better quality than scraping LinkedIn.

6) What to do when the machine intelligence fails? – We have to use human intelligence. Period!

Datahut is working on a platform that can help you get leads that match your ideal buyer persona. It will be a complete business intelligence platform powered by machine and human intelligence for efficient lead research & discovery. We named it Leadintel. We’ve also established some partnerships that help enrich the data and save us the trouble of lawsuits.

We are opening our platform for beta users. You can request an invitation using the contact form. What do you think about this? What are your suggestions?

Thanks for reading this blog post. Datahut offers affordable data extraction services (DaaS). If you need help with your web scraping projects, let us know and we will be glad to help.

Source:http://blog.datahut.co/best-alternative-for-linkedin-data-scraping/

Tuesday, 12 July 2016

Content Scrapers – How to Find Out Who is Stealing Your Content & What to Do About It

If you have been blogging for a while, chances are you are familiar with content scrapers. Content scrapers are websites that steal your content for their own blogs without your permission. Some content scrapers will just copy the content off of your blog, but most use automated software that takes the content from your RSS feed and posts your content to their site like it is a new post.

In this post, we are going to look at some potential link building benefits to content scrapers, how to find out what sites are scraping your content, and what you can do if you want to either benefit from the linking standpoint or have them take it down.

Linking Benefits of Content Scrapers

Last week, I was happy to see that I was listed in ProBlogger’s 20 Bloggers to Watch in 2012. Within 24 hours, I received a notification in my WordPress dashboard that a page on my blog had been linked to in the post on ProBlogger’s site.

After receiving the original notification from the ProBlogger post, I also received another 18 trackbacks from sites that had stolen the content in their post verbatim. Trackbacks are WordPress’ way of letting you know that another website has linked to a post on your blog. In this case, these 18 sites had posted the content exactly like the original post – with the links back to my blog still intact.

It was then that I started contemplating the potential link building benefits of content scrapers. These are not by any means quality links – the highest Google PageRank was a PR 2 domain, many were stealing content in a variety of languages, and one even had the nerve to use some kind of redirection script to take away the link juice of outgoing links! So while these links didn’t have the same authority that the original post had, they still count as links.

How to Catch Content Scrapers

Unfortunately, unless you want to continuously search for your post titles in Google, you’ll only be able to easily track down sites that keep your in-content links active. If you want to know what websites are scraping your content, here are a few tips to sniff them out.

Copyscape

Copyscape is a simple search engine that allows you to enter the URL of your content to find out if there are duplicates of it on the Internet. You can get a few results using their free search, or you can pay for a premium account to check up to 10,000 pages on your site and more.

Trackbacks

The first way is through your trackbacks in WordPress. Many of these will show up in the spam folder if you use Akismet. The key to getting trackbacks to appear from content scrapers is to always include links to other posts in your content. Be sure those links have great anchor text too, if you’re going for a little extra link juice. And even if you are not, internal linking with strong anchor text is good for your on-site optimization too!

Anyone thinking about link building benefits at this point is probably noting the sheer volume of links from these sites, some of which are content scrapers. Essentially any site that is linking to a lot of your posts that isn’t a social network, social bookmarking site, or a die-hard fan who just loves linking to you is potentially a content scraper. You’ll have to go to their website to be sure. To find your links on their site, click on one of the domains to see the details of what pages on your site they are linking to specifically.

You can see here that they are just blatantly copying my post titles. When I visited one of the links, sure enough, they had copied my entire posts in their full glory onto their site.

Google Alerts

If you don’t post often or want to keep up with any mentions of your top blog posts on other websites, you can create a Google Alert using the exact match for your post’s title by putting the title in quotation marks.

I deliver all of my Google Alerts to an RSS feed so I can manage them in Google Reader, but you can also have them delivered regularly by email. You’ll even get an instant preview of the types of results you will get.

How to Get Credit for Scraped Posts

If you use WordPress, then you definitely want to try out the RSS footer plugin. This plugin allows you to place a custom piece of text at the top or bottom of your RSS feed content.
Even if you aren’t using it for the purpose of getting credit back to your posts when content thieves steal them, you can still use it for a little extra bit of advertising, with the possible benefit of people who subscribe to your RSS feed clicking through to your website or social profiles. And when someone does scrape your content from your RSS feed, the credit shows up there too.

So in the event that someone finds your scraped content, they will hopefully notice the credit before assuming it was created by the blog that stole it. If you don’t have WordPress, you can simply include a note at the top or bottom of your content that includes the same information.

How to Stop Content Scrapers

If you’re not interested in anyone copying your content, then you have a few options to choose from. You can start by contacting the site that is stealing your content and sending them a notice that you want all of your content removed immediately. You can do this through the site’s contact form, email address, or post it to any social accounts they list.

If there is no contact information on the website stealing your content, you can do a Whois Lookup to (hopefully) find out who owns the domain.

If it is not privately registered, you should find an administrative contact’s email address. If not, you should at least see the domain registrar which, in this case, is GoDaddy and/or the hosting company for the website which, in this case, is HostGator. You can try to contact both companies (HostGator has a DMCA form and GoDaddy has an email) and let them know that the domain in question is stealing copyrighted content in hopes that the website will be suspended or removed.

You can also visit the DMCA and use their takedown services to remove anyone who is copying your photos, video, audio, blog, or other content. They even offer a WordPress plugin to incorporate a DMCA protected badge on your site to warn potential thieves.

Have you ever dealt with content scrapers and thieves? Do you leave it alone for the link benefits, or do you fight back? What other tools, services, or other preventative tactics do you use to block content scrapers? Please share your thoughts and experiences in the comments!

Source URL : https://blog.kissmetrics.com/content-scrapers/

Saturday, 9 July 2016

Data Scraping – Will Definitely Benefit a Business Startup

With more and more data shared on the internet, both the amount of data collected and the number of use cases are growing at an unbelievable pace. We have entered the “Big Data” age, and data scraping is among the resources that feed big data engines with the latest data for analytics and competition monitoring, or, less legitimately, simply to steal data.

From a technology viewpoint, competent data scraping is fairly complicated. There are many open-source projects that allow anybody to run a web data scraper on their own. Nevertheless, it is an entirely different story when scraping needs to live inside a business, where you must not only maintain your scrapers but also scale them and extract the data intelligently, exactly as you need it.

That is the reason why various services sell “data scraping” as a service. Their job is to take care of all the technical details so that you can get the data you require without any specialist knowledge. Fundamentally, all these startups focus on collecting the data and then extracting its value to sell it to their customers.

Let’s take some examples:

• Sales Intelligence – Scrapers monitor competitors, marketplaces, online directories, and data from public markets to discover leads. For instance, some tools track websites that drop or add competitors’ JavaScript tags, so those sites can be flagged as eligible leads.
• Price Intelligence – A very common use is price monitoring. In the e-commerce, travel, or property industries, monitoring competitors’ prices and adjusting yours accordingly is generally the key. These services monitor prices and, using analytical algorithms, can advise you on where the puck is heading.
• Marketing – Data scraping may also be used to monitor how competitors are doing. From the reviews they receive on marketplaces to their press coverage and published financial data, one can find out a lot. On the marketing side, there are even growth hacking classes that teach how to use scraping for marketing objectives.

Financial intelligence, economic intelligence, and similar fields benefit as well: more and more financial, political, and economic data is accessible online, and the newer services that collect and aggregate it are on the rise.

Let’s go through some points concerning the market:

• It’s tough to evaluate how big the data scraping market is, as it sits at the intersection of many large industries such as sales, IT security, finance, and marketing intelligence. Scraping is certainly only a small part of each of these industries, but it is expected to grow in the coming years.
• It’s a safe bet that SaaS products will increasingly find pioneering applications for web data scraping, and that more and more startups will use data scraping services, including from a security viewpoint.
• As these startups are generally entering huge markets with niche products and approaches (web data scraping isn’t a solution to everything; it’s more like a feature), they are likely to be acquired by bigger players (within the security, sales, or marketing tools industries). There are technological barriers as well.

Source URL : http://www.3idatascraping.com/data-scraping-will-definitely-benefit-a-business-startup.php

Friday, 8 July 2016

Data Scraping - What Are Hand-Scraped Hardwood Floors and What Are the Benefits?

If you love the look of hardwood flooring with lots of character, then you may want to check out hand-scraped hardwood flooring. Hand-scraped wood provides a warm vintage look, providing the floor instant character. These types of scraped hardwoods are suitable for living rooms, dining rooms, hallways and bedrooms. But what exactly is hand-scraped hardwood flooring?

Well, it is literally what you think it is. Hand-scraped hardwood flooring is created by hand, using specialized woodworking tools to make each board unique and give the floor an overall "old worn" appearance.

At Innovation Builders we offer solid wood floors finished on site with an actual hand-scraping technique followed by stain and sealer. Solid wood floors are installed by an expert team of technicians who work each board with skilled craftsman-like attention to detail. Following the scraping procedure the floor is stained by hand with a customer selected stain color, and then protected with multiple coats of sealing and finishing polyurethane. This finishing process of staining, sealing and coating the wood floors contributes to providing the look and durability of an old reclaimed wood floor, but with today's tough, urethane finishes.

There are many, many benefits to hand-scraped wood flooring. Overall, these floors are extremely durable and hard wearing, providing years of trouble-free use. These wood floors remain looking newer for longer because the texture that the process provides hides the typical dents, dings and scratches that other floors can't hide so easily. That's great news for households with kids, dogs, and cats.

These types of wood flooring have another unique advantage as well. When you do scratch these floors during their lifetime, the scratches are easily repaired. As long as the scratch isn't too deep you can make them practically disappear without ever having to hire a professional. It's simple to hide the scratch by using a color-matched stain marker or repair kit that is readily available through local flooring distributors. These features make hand-scraped hardwood flooring a lot more durable and hassle-free to maintain than other types of wood flooring.

The expert processes utilized in the creation of these floors provides a custom look of worn wood with deep color and subtle highlights. When the light hits the wood at different times during the day, it provides an understated but powerful effect of depth and beauty. They instantly offer your rooms a rustic look full of character, allowing your home to become a warm and inviting environment. The rustic look of this wood provides a texture, style and rustic appeal that cannot be matched by any other type of flooring.

Hand-Scraped Hardwood Flooring is a floor that says welcome and adds a touch of elegance to any home. If you are looking to buy a new home and you haven't had the opportunity to see or feel hand scraped hardwoods, stop in any of the model homes at Innovation Builders in Keller, North Richland Hills or Grand Prairie, Texas and check it out!

Source URL :   http://yellowpagesdatascraping.blogspot.in/2015/06/data-scraping-what-are-hand-scraped.html

Saturday, 18 June 2016

Scraping the Bottom of the Barrel - The Perils of Online Article Marketing

Many online article marketers so desperately wish to succeed, they want to dump corporate life and work for themselves out of their home. They decide they are going to create an online money making website. Therefore, they look around to see what everyone else is doing, and watch the methods others use to attract online buyers, and then they mimic their marketing, their strategies, and their business models.

Still, if you are copying what other people (less ethical people) are doing in online article marketing, those which are scraping the bottom of the barrel and using false advertising and misrepresentations, then all you are really doing is perpetuating distrust on the Internet. Therefore, you are hurting everyone, including people like me. You must realize that people like me don't appreciate that.

Let me give you a few examples of some of the things going on out there, things that are being done by people who are ethically challenged. Far too many people write articles and then, in their byline, send the Internet surfer or reader of the article to a website that has a squeeze page. The squeeze page has no real information on it; rather, it asks for their name and e-mail address.

If the would-be Internet surfer is unwise enough to type in their name and email address they will be spammed by e-mail, receiving various hard-sell marketing pieces. Then, if the Internet Surfer does decide to put in their e-mail address, the website grants them access and then takes them to the page with information about what they are selling, or their online marketing "make you a millionaire" scheme.

Generally, these are five page sales letters, with tons of testimonials of people you've never heard of, and may not actually exist, and all sorts of unsubstantiated earnings claims of how much money you will make if you give them $39.35 by way of PayPal, for this limited offer "Now!" And they will send you an E-book with a strategic plan of how you can duplicate what they are doing. The reality is whatever they are doing is questionable to begin with.

Source URL  : http://ezinearticles.com/?Scraping-the-Bottom-of-the-Barrel---The-Perils-of-Online-Article-Marketing&id=2710103

Thursday, 12 May 2016

Beginner’s guide to Web Scraping in Python (using Beautiful Soup)

Introduction

The need and importance of extracting data from the web is becoming increasingly loud and clear. Every few weeks, I find myself in a situation where we need to extract data from the web. For example, last week we were thinking of creating an index of hotness and sentiment about various data science courses available on the internet. This would not only require finding new courses, but also scraping the web for their reviews and then summarizing them in a few metrics! This is one of those problems / products whose efficacy depends more on web scraping and information extraction (data collection) than on the techniques used to summarize the data.

Ways to extract information from web

There are several ways to extract information from the web. Using APIs is probably the best way to extract data from a website. Almost all large websites like Twitter, Facebook, Google and StackOverflow provide APIs to access their data in a more structured manner. If you can get what you need through an API, it is almost always the preferred approach over web scraping. This is because if the provider is giving you access to structured data, why would you want to create an engine to extract the same information?

Sadly, not all websites provide an API. Some do it because they do not want readers to extract huge amounts of information in a structured way, while others don’t provide APIs due to a lack of technical knowledge. What do you do in these cases? Well, we need to scrape the website to fetch the information.

There might be a few other ways like RSS feeds, but they are limited in their use and hence I am not including them in the discussion here.

What is Web Scraping?

Web scraping is a computer software technique of extracting information from websites. This technique mostly focuses on the transformation of unstructured data (HTML format) on the web into structured data (database or spreadsheet).

You can perform web scraping in various ways, from using Google Docs to almost every programming language. I would resort to Python because of its ease of use and rich ecosystem. It has a library known as ‘Beautiful Soup’ which assists with this task. In this article, I’ll show you the easiest way to learn web scraping using Python programming.

For those of you who need a non-programming way to extract information out of web pages, you can also look at import.io. It provides a GUI-driven interface to perform all basic web scraping operations. The hackers can continue to read this article!

Libraries required for web scraping

As we know, Python is an open source programming language, and you may find many libraries that perform one function. Hence, it is necessary to find the best library to use. I prefer Beautiful Soup (a Python library), since it is easy and intuitive to work with. Precisely, I’ll use two Python modules for scraping data:

Urllib2: It is a Python module which can be used for fetching URLs. It defines functions and classes to help with URL actions (basic and digest authentication, redirections, cookies, etc). For more detail refer to the documentation page.

Beautiful Soup: It is an incredible tool for pulling out information from a webpage. You can use it to extract tables, lists, paragraph and you can also put filters to extract information from web pages. In this article, we will use latest version Beautiful Soup 4. You can look at the installation instruction in its documentation page.

Beautiful Soup does not fetch the web page for us. That’s why, I use urllib2 in combination with the BeautifulSoup library.

Python has several other options for HTML scraping in addition to Beautiful Soup. Here are some others:

    -mechanize
    -scrapemark
    -scrapy

Basics – Get familiar with HTML (Tags)

While performing web scraping, we deal with HTML tags, so we must have a good understanding of them. If you already know the basics of HTML, you can skip this section. The basic syntax of an HTML document involves various tags, as elaborated below:

    <!DOCTYPE html> : HTML documents must start with a type declaration
    The HTML document is contained between <html> and </html>
    The visible part of the HTML document is between <body> and </body>
    HTML headings are defined with the <h1> to <h6> tags
    HTML paragraphs are defined with the <p> tag

Scraping a Web Page using Beautiful Soup

Here, I am scraping data from a Wikipedia page. Our final goal is to extract the list of state and union territory capitals in India, along with some basic details like year of establishment and former capital, from this Wikipedia page. Let’s learn by doing this project step by step:

Import necessary libraries:

#import the library used to query a website
import urllib2
#specify the url
wiki = "https://en.wikipedia.org/wiki/List_of_state_and_union_territory_capitals_in_India"
#Query the website and return the html to the variable 'page'
page = urllib2.urlopen(wiki)
#import the Beautiful Soup functions to parse the data returned from the website
from bs4 import BeautifulSoup
#Parse the html in the 'page' variable, and store it in Beautiful Soup format
soup = BeautifulSoup(page)

Use the function “prettify” to look at the nested structure of the HTML page:
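
For instance (a quick sketch; soup is the variable created in the code above):

print soup.prettify()[0:1000]  #print the first part of the nicely indented HTML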

Above, you can see the structure of the HTML tags. This will help you learn about the different tags available and how you can play with them to extract information.

Work with HTML tags

    soup.<tag>: Return content between opening and closing tag including tag.
    In[30]:soup.title
    Out[30]:<title>List of state and union territory capitals in India - Wikipedia, the free encyclopedia</title>
    soup.<tag>.string: Return string within given tag
    In [38]:soup.title.string
    Out[38]:u'List of state and union territory capitals in India - Wikipedia, the free encyclopedia'

Find all the links within the page’s <a> tags: We know that we can mark up a link using the tag “<a>”. So, we should go with the option soup.a, and it should return the links available in the web page. Let’s do it.

    In [40]:soup.a
    Out[40]:<a id="top"></a>

Above, you can see that we have only one output. Now, to extract all the links within <a> tags, we will use find_all().

This shows all the links, including titles and other information. Now, to show only the links, we need to iterate over each <a> tag and then return the link using the attribute “href” with get, as sketched below.
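
A minimal sketch of that loop, reusing soup from the code above:

all_links = soup.find_all("a")
for link in all_links:
    print link.get("href")   #print only the link target stored in the "href" attribute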

Find the right table: As we are seeking a table to extract information about state capitals, we should identify the right table first. Let’s write the command to extract information within all table tags.

all_tables=soup.find_all('table')

Now, to identify the right table, we will use the “class” attribute of the table and use it to filter the right table. In Chrome, you can check the class name by right-clicking on the required table of the web page –> Inspect element –> copy the class name, or go through the output of the above command to find the class name of the right table.

right_table=soup.find('table', class_='wikitable sortable plainrowheaders')

right_table

Extract the information into a DataFrame: Here, we need to iterate through each row (<tr>) and then assign each element of the row (<td>) to a variable and append it to a list. Let’s first look at the HTML structure of the table (I am not going to extract information from the table heading <th>).
If you inspect the table, you will notice that the second element of each <tr> is within a <th> tag, not a <td>, so we need to take care of this. Now, to access the value of each element, we will use the “find(text=True)” option with each element. Let’s look at the code:

#Generate lists

A=[]
B=[]
C=[]
D=[]
E=[]
F=[]
G=[]
for row in right_table.findAll("tr"):

    cells = row.findAll('td')
    states=row.findAll('th') #To store second column data
    if len(cells)==6: #Only extract table body not heading
        A.append(cells[0].find(text=True))
        B.append(states[0].find(text=True))
        C.append(cells[1].find(text=True))
        D.append(cells[2].find(text=True))
        E.append(cells[3].find(text=True))
        F.append(cells[4].find(text=True))
        G.append(cells[5].find(text=True))

#import pandas to convert list to data frame

import pandas as pd
df=pd.DataFrame(A,columns=['Number'])
df['State/UT']=B
df['Admin_Capital']=C
df['Legislative_Capital']=D
df['Judiciary_Capital']=E
df['Year_Capital']=F
df['Former_Capital']=G
df

Similarly, you can perform various other types of web scraping using “Beautiful Soup“. This will reduce your manual effort in collecting data from web pages. You can also look at other attributes like .parent, .contents, .descendants, .next_sibling and .previous_sibling, as well as the various ways to navigate using tag names. These will help you scrape web pages effectively.
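
For instance, a couple of quick navigation examples (a sketch reusing the soup and right_table variables from the code above; the exact output depends on the page):

print soup.title.parent.name             #the tag that directly contains <title>, normally 'head'
first_row = right_table.findAll("tr")[0]
print first_row.parent.name              #the element that wraps the table rows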

But, why can’t I just use Regular Expressions?

Now, if you know regular expressions, you might be thinking that you could write code using them to do the same thing. I definitely had this question. From my experience of using both Beautiful Soup and regular expressions for the same task, I found that:

Code written with Beautiful Soup is usually more robust than code written using regular expressions. Code written with regular expressions needs to be altered whenever the pages change. Even Beautiful Soup needs that in some cases; it is just that Beautiful Soup copes relatively better.

Regular expressions are much faster than Beautiful Soup, usually by a factor of 100 in giving the same outcome.

So, it boils down to speed vs. robustness of the code and there is no universal winner here. If the information you are looking for can be extracted with simple regex statements, you should go ahead and use them. For almost any complex work, I usually recommend BeautifulSoup more than regex.
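
For example, here is a self-contained sketch (with an invented snippet of HTML) of a case simple enough for a regex:

import re

html_text = '<html><head><title>Example page</title></head><body></body></html>'
titles = re.findall(r'<title>(.*?)</title>', html_text)   #capture whatever sits between the title tags
print titles   #['Example page']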

End Note

In this article, we looked at web scraping methods using “Beautiful Soup” and “urllib2” in Python. We also looked at the basics of HTML and performed the web scraping step by step while solving a challenge. I’d recommend you practice this and use it for collecting data from web pages.


 Source : http://www.analyticsvidhya.com/blog/2015/10/beginner-guide-web-scraping-beautiful-soup-python/

Monday, 25 April 2016

Taking a cue from the Ryanair screen scraping judgment

Screen scraping is the best way to aggregate web data, much faster than a human possibly can. However, is there any such thing as ethical web scraping?

Well it’s not scraping in itself that is good or bad. What matters is how you use the scraped data. It would be extremely unethical to steal data, republish it or use it to cause harm to a business. This was clearly established when the EU passed a judgment in favour of Ryanair.

The EU’s highest court passed a judgment in favour of Ryanair, and this should help prevent people from scraping and using others’ data in an unethical manner. Ryanair had claimed that PR Aviation scraped its data regarding flight schedules and then used it to allow people to book Ryanair flights via the PR Aviation website, which it argued was a clear infringement of its database rights.

Source:http://www.habiledata.com/blog/taking-cue-ryanair-screen-scraping-judgment/