Scrapebox is arguably one of the most important SEO tools ever made, packed with features that almost every serious SEO can put to use.
However, if you're completely new to Scrapebox, or even to SEO, you may want to get familiar with the basics before moving forward.
That said, here are some of the most important Scrapebox features you can use to attract search engine traffic to your website(s).
Harvesting URLs
You can use Scrapebox to harvest a huge number of URLs related to the keywords you're targeting or the niche you're in.
If that sounds vague right now, don't worry; we'll go much deeper into it, and it should be clear by the time we're done explaining.
Firstly, by harvesting URLs, we mean querying search engines like Google, Yahoo, and Bing (most people use only Google, though) for your particular keywords/niche.
Let us give you a simple example. Say you run a reputable offline business selling chairs and now want to expand by taking it online. So basically, you'd be interested in selling chairs online.
So in this case, you would want to harvest URLs of sites related to chairs. More on why we need to do that in a bit.
Now, before you go ahead and start harvesting URLs, you need to know a few things. Firstly, if you're running bigger campaigns and want to harvest a lot of URLs quickly, you'll want to get a bunch of proxies.
Proxies act as a middleman between you and the search engines. Look at it this way: it's obviously not possible for a human to search for a ton of queries within a minute or so, so the search engines will detect that someone is trying to spam them and block your IP.
With proxies, however, the queries are spread across many different IPs, which keeps everything looking natural and avoids getting your own IP banned.
You may also want to note that the format for entering proxies in the software is IP:Port:Username:Password. Don't worry about it, though, as these details will be provided by your proxy seller.
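To make that format concrete, here's a minimal Python sketch of how a proxy line in the IP:Port:Username:Password format breaks down. The values and the helper function are made up for illustration; this isn't part of Scrapebox itself.

```python
def parse_proxy(line):
    """Split a proxy line in IP:Port:Username:Password format into its parts."""
    parts = line.strip().split(":")
    if len(parts) != 4:
        raise ValueError(f"Expected IP:Port:Username:Password, got: {line!r}")
    ip, port, user, password = parts
    return {"ip": ip, "port": int(port), "user": user, "password": password}

# Example line, as a proxy seller might supply it (made-up values):
proxy = parse_proxy("203.0.113.5:8080:myuser:mypass")
print(proxy["ip"], proxy["port"])  # 203.0.113.5 8080
```

Your seller's file will usually already be in this exact format, so you can paste it into Scrapebox as-is.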
You'd also have to make sure the proxies are working and not dead. Scrapebox comes with a built-in proxy checker that lets you do exactly that. Take a look at the images below.
Once you’re sure they are working fine, you can simply save them as shown in the image below.
You can then start harvesting URLs related to your niche. You'll also need to enter "footprints", which help fetch relevant results.
Footprints are snippets of text that tend to appear on the kind of site you're looking to harvest; for example, the footprint "Powered by WordPress" finds WordPress blogs. Combine a footprint with a keyword like "chairs" and the harvested URLs will be pages of that type that are also related to your keyword.
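Under the hood, combining footprints with keywords is just building one search query per footprint/keyword pair. A rough Python sketch of the idea (the footprints and keywords here are only examples, not a recommended list):

```python
# Footprints identify a type of page; keywords narrow the results to your niche.
footprints = ['"Powered by WordPress"', '"Leave a Reply"']
keywords = ["chairs", "office chairs"]

# One search query is built for every footprint/keyword combination:
queries = [f"{footprint} {keyword}" for footprint in footprints for keyword in keywords]
for query in queries:
    print(query)
```

Scrapebox then sends each of these queries to the search engines and collects the result URLs for you.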
What to do with the harvested URLs?
Well, as promised earlier, we will now be getting into why you need to harvest URLs and how exactly to use them.
To put it simply, these harvested URLs will be relevant to your niche (provided you entered the right footprints/keywords), and can be used to improve your website's search engine position and bring in more traffic.
Now, let's get into how exactly you would use these URLs to attract visitors who are interested in your niche and what you're offering.
Blog commenting
You can use Scrapebox to do mass blog commenting by harvesting URLs of blogs relevant to your niche. The software leaves a comment that includes your website URL, which can bring you targeted traffic from those blogs and may help your search engine rankings.
Doing competitor research
You can pick some successful competitors' websites from the harvested URL list and analyze them, which helps you discover effective strategies you can apply to your own website as well.
Keyword research
You can also use Scrapebox for keyword research using the harvested list. This can help you find keywords that are less competitive and hence easier to target.
And a lot more!
The above are just some of the things you can do with Scrapebox; there's actually a lot more to it.
Now, although it can be intimidating at first, you'll probably get the hang of it quickly with the right resources at hand.
Finding expired domains with Scrapebox
Well, this is something many marketers actually use the software for. Scrapebox still has plenty of uses, but you can no longer mass-spam with it and get great results from search engines like you could in the past.
That doesn’t make it any less useful, though, especially if you know how to use it the right way.
Now, coming to finding expired domains with Scrapebox, let's first understand what expired domains are and why you would want to register domains abandoned by their previous owners.
What are expired domains?
As the name suggests, expired domains are simply domains that have expired, in other words, not been renewed by their owners. For example, say you bought a domain and paid the registration cost for a year, but didn't renew it when it was about to expire, so you lost ownership of it.
Now this domain would be called an “expired” domain, simply because it has expired but wasn’t renewed.
Why would you want to get expired domains?
You see, such domains can be extremely valuable as far as SEO is concerned, for several reasons; some of the most important ones are mentioned below.
- They can be very aged domains (meaning they have existed for a very long time)
- They may have huge authority and other reputed sites linking to them
- They may have gained authority in the eyes of the search engines
Now let's go a little deeper into the above points. Firstly, aged domains are considered more valuable for SEO simply because they have been around for a long time and hence tend to earn more trust from search engines like Google than brand-new domains.
Secondly, as they may have been up and running for quite a few years, they may also have picked up "backlinks" from many websites, including huge authority sites. Whenever one site links to another, that link is called a backlink.
So for example, if a domain has been mentioned in the “references” of a particular topic on Wikipedia, that domain has got a “backlink” from Wikipedia.
Having backlinks from such authoritative sites goes a long way toward increasing the SEO power of a domain. Such domains thus become very useful when it comes to getting traffic to your website through SEO.
How to find expired domains using Scrapebox?
Now that you know what an expired domain is and why it makes sense to get one, let's get into how to find them using the SEO community's most beloved software, Scrapebox.
Without getting into too much detail: you simply choose the sites you'd want your expired domains to have backlinks from. Make sure they are authority sites, such as the Huffington Post, Wikipedia, Mashable, the New York Times, and so on.
So for example, if you want to have domains that have expired but had links from the New York Times, you may first want to scrape a big list of URLs of its older posts.
Why older posts? Simply because the latest posts are very unlikely to link to expired domains, for obvious reasons. However, some of the domains that older posts linked to may have expired since, allowing you to re-register them and use them for SEO.
So you can simply scrape such websites with Scrapebox, and it will give you the URLs of the older posts you're looking for. You then need to check which domains those URLs link out to, by putting them into Scrapebox's outbound link checker.
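Conceptually, an outbound link check just collects the external domains a page links to. Here's a simplified sketch using Python's standard library; the class name and sample HTML are made up for illustration, and the real addon of course works on live URLs rather than strings:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class OutboundLinkParser(HTMLParser):
    """Collect the external domains that a page's <a href> tags point to."""
    def __init__(self, own_domain):
        super().__init__()
        self.own_domain = own_domain
        self.outbound = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        domain = urlparse(href).netloc
        # Keep only links that leave the site being scanned
        if domain and domain != self.own_domain:
            self.outbound.add(domain)

# Example: a fragment of an old article (made-up links)
html = ('<p><a href="http://example-chairs.com/shop">chairs</a> '
        '<a href="https://www.nytimes.com/about">about</a></p>')
parser = OutboundLinkParser("www.nytimes.com")
parser.feed(html)
print(parser.outbound)  # {'example-chairs.com'}
```

Scrapebox does this across your whole harvested list at once, which is what makes it so much faster than checking by hand.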
Although this may fetch thousands of domains, it's still very quick, as Scrapebox can harvest and check large lists of domains in a short time, though the speed also depends on how many proxies you use, how fast they are, and so on.
Now, without doing anything else to the URL list the outbound link checker gave you, you can put it straight into the domain availability checker addon, which will quickly show you the domains that are available to register. However, before registering any domain that has a backlink from the site and is available, you may want to check its overall backlink profile.
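Whether you do it or the addon does it internally, checking availability ultimately comes down to reducing the outbound URLs to a list of unique domains and testing each one. A small illustrative sketch of that first step (the URLs are made up; the actual availability test queries registrars or DNS, which isn't shown here):

```python
from urllib.parse import urlparse

def unique_domains(urls):
    """Reduce a list of outbound URLs to the unique domains behind them."""
    domains = set()
    for url in urls:
        netloc = urlparse(url).netloc.lower()
        if netloc.startswith("www."):
            netloc = netloc[4:]  # treat www.foo.com and foo.com as one domain
        if netloc:
            domains.add(netloc)
    return sorted(domains)

urls = [
    "http://old-chair-blog.com/post/1",
    "http://www.old-chair-blog.com/post/2",
    "https://another-dead-site.net/page",
]
print(unique_domains(urls))  # ['another-dead-site.net', 'old-chair-blog.com']
```

Each unique domain from this list is what the availability checker then tests for registration.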
This helps you ensure there are no spammy backlinks pointing to the domain you're registering, and that the other links are of good quality as well. You can do this with Scrapebox's page authority addon by simply feeding it the list of available domains.
Domains having a high page and domain authority are usually considered to be pretty good expired domains.
A final word
While Scrapebox may seem to have a steep learning curve, things turn out to be much easier than you'd think once you get the hang of it.
After playing around with it for a few days and reading useful articles like this one, you'll likely feel comfortable using it and taking your online business to new heights.