Searching for new domain names drives me insane. I want donuts.com, but I usually end up buying affordabledonutsindallastexas.com after wasting more hours searching than I care to admit. I’m not creative or stubborn enough to sit around and come up with names myself, but I can code, so we are going to solve the issue that way.
We can make finding a domain name easier by creating a domain name generator. It will take a keyword, scrape a thesaurus website to find synonyms for that word, then check the availability of that domain name using Whois.
Also, please note that scraping websites is usually against their terms of service. After using this method for a few weeks, I could no longer reach the website I used to find synonyms. Since I could still access it through a VPN, I'm assuming they blocked my IP address. That's pretty easy to get around, but I figured it was best not to cause them any more trouble. For that reason, I have replaced the site I used with a placeholder. If you can find a service that offers synonym lookups through an API, that would be best.
Okay, enough talking! Here is the code.
require 'open-uri'
require 'nokogiri'
require 'optparse'
require 'whois-parser'
require 'whois'

options = {}
OptionParser.new do |opts|
  opts.banner = "Usage: example.rb [options]"
  opts.on("-w", "--words \"one, two, three\"") do |w|
    options[:words] = w
  end
end.parse!

class MyDomain
  attr_accessor :valid_domain_names

  def initialize(words = [])
    @words = []
    words.each do |f|
      fetch_words(fetch_html("https://www.THESAURUS_SITE_PLACEHOLDER.com/#{f}/synonyms"))
    end
    @valid_domain_names = []
    find_valid_domain_names
    save_to_file
  end

  private

  # Open the page and parse the HTML
  def fetch_html(url)
    Nokogiri::HTML(URI.open(url).read)
  end

  # Collect potential domain names from the thesaurus page
  def fetch_words(html)
    html.css('.ELEMENT_CONTAINING_SYNONYM').each { |w| @words << w.children.to_s }
  end

  # Check whether the words gathered from the thesaurus are available domain names
  def find_valid_domain_names
    clean_words
    word_count = @words.count
    @words.each_with_index do |f, i|
      begin
        puts "Word #{i + 1} of #{word_count}"
        puts "Checking: #{f}.com"
        @valid_domain_names << "#{f}.com" if Whois.whois("#{f}.com").parser.available?
      rescue StandardError => e
        puts "Could not check #{f}.com (#{e.message}). Going to sleep... zZzZzZz"
      end
      sleep(2)
    end
  end

  # Strip spaces, apostrophes, and dashes from each word, then drop duplicates
  def clean_words
    @words.map! { |f| f.gsub(/\s/, '').gsub("'", '').gsub('-', '') }.uniq!
  end

  # Create a file and save the available domain names
  def save_to_file
    file_time = Time.now.strftime("%d_%m_%Y%H%M")
    File.open("./available_domains_#{file_time}.txt", 'w') { |f| f.write(@valid_domain_names.uniq.join("\n")) }
  end
end

abort("Please pass at least one word, e.g. ruby example.rb -w donut") if options[:words].nil?
MyDomain.new(options[:words].split(',').map(&:strip))
For those who are still learning Ruby, let's now go into a little more depth! First, we will import our dependencies.
require 'open-uri'
require 'nokogiri'
require 'optparse'
require 'whois-parser'
require 'whois'
The open-uri library lets us make a GET request to the website we intend to scrape and hands back the raw HTML, which we can then easily parse with Nokogiri. Optparse is what we will use to handle arguments passed on the command line. Whois and whois-parser will be used to check whether a domain name is available.
Next, we will handle the arguments passed on the command line, as mentioned above.
options = {}
OptionParser.new do |opts|
  opts.banner = "Usage: example.rb [options]"
  opts.on("-w", "--words \"one, two, three\"") do |w|
    options[:words] = w
  end
end.parse!
Instead of having to edit our script every time we want to search for a new word, this code allows us to just pass it when we call the script. We can now do this: "ruby our_script.rb -w donut".
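To see how this plays out, here is a small sketch of the same flag parsing run against an explicit argument array instead of ARGV, so you can experiment with it directly. The sample words ("donut, pastry") are just made up for illustration:

```ruby
require 'optparse'

options = {}
OptionParser.new do |opts|
  # Same option definition as in the script; the long form takes an argument.
  opts.on("-w", "--words \"one, two, three\"") do |w|
    options[:words] = w
  end
end.parse!(["-w", "donut, pastry"])

# Split on commas and strip stray whitespace before using the words.
words = options[:words].split(',').map(&:strip)
# words == ["donut", "pastry"]
```

Because the whole comma-separated list arrives as one string, splitting and stripping it up front saves you from searching for " pastry" with a leading space.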
This next bit of code is fairly simple. In our initialize method we make a GET request for each word passed on the command line. fetch_html gets the raw HTML, which we then pass to fetch_words. fetch_words parses the HTML and grabs the words and terms whose availability we want to check. All of those words are stored in the @words array, which we defined in our initialize method.
attr_accessor :valid_domain_names

def initialize(words = [])
  @words = []
  words.each do |f|
    fetch_words(fetch_html("https://www.THESAURUS_SITE_PLACEHOLDER.com/#{f}/synonyms"))
  end
  @valid_domain_names = []
  find_valid_domain_names
  save_to_file
end

private

# Open the page and parse the HTML
def fetch_html(url)
  Nokogiri::HTML(URI.open(url).read)
end

# Collect potential domain names from the thesaurus page
def fetch_words(html)
  html.css('.ELEMENT_CONTAINING_SYNONYM').each { |w| @words << w.children.to_s }
end
Now that we have all of our words, it's time to check whether they are available. In the find_valid_domain_names method, we first clean our words. We do this because the site I used also returned multi-word terms that were synonymous with the word I searched for. I didn't want to throw those terms away, so we strip out any dashes or spaces before checking availability. Next, we loop through the @words array and check whether each word is available, using the whois gems we imported at the top of the script. Finally, back in the initialize method shown above, the last thing we do is save the results. In this case, we write them to a text file in the current directory.
# Check whether the words gathered from the thesaurus are available domain names
def find_valid_domain_names
  clean_words
  word_count = @words.count
  @words.each_with_index do |f, i|
    begin
      puts "Word #{i + 1} of #{word_count}"
      puts "Checking: #{f}.com"
      @valid_domain_names << "#{f}.com" if Whois.whois("#{f}.com").parser.available?
    rescue StandardError => e
      puts "Could not check #{f}.com (#{e.message}). Going to sleep... zZzZzZz"
    end
    sleep(2)
  end
end
# Strip spaces, apostrophes, and dashes from each word, then drop duplicates
def clean_words
  @words.map! { |f| f.gsub(/\s/, '').gsub("'", '').gsub('-', '') }.uniq!
end
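Here is a quick, self-contained look at what that cleaning step does to a few made-up synonyms, including a multi-word term and a duplicate:

```ruby
words = ["jelly donut", "cruller", "old-fashioned", "cruller"]

# Same chain as clean_words: squash whitespace, apostrophes, and
# dashes out of each entry, then drop duplicates in place.
words.map! { |f| f.gsub(/\s/, '').gsub("'", '').gsub('-', '') }.uniq!

# words == ["jellydonut", "cruller", "oldfashioned"]
```

One Ruby gotcha worth knowing: uniq! returns nil when there were no duplicates to remove, so you should rely on the array being mutated in place (as the script does) rather than on the chain's return value.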
# Create a file and save the available domain names
def save_to_file
  file_time = Time.now.strftime("%d_%m_%Y%H%M")
  File.open("./available_domains_#{file_time}.txt", 'w') { |f| f.write(@valid_domain_names.uniq.join("\n")) }
end
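A side note on the timestamp in the filename: an earlier draft of this method built it by formatting a date and then stripping the slashes, spaces, and colons back out with a chain of gsub calls. A single strftime format string produces the exact same result, which you can verify with a fixed point in time:

```ruby
t = Time.new(2024, 3, 7, 9, 5)

# The gsub-chain approach: format, then strip the separators back out.
chained = t.strftime("%d/%m/%Y %H:%M").gsub('/', '_').gsub(/\s+/, "").gsub(':', '')

# The direct approach: bake the final shape into the format string.
direct = t.strftime("%d_%m_%Y%H%M")

# Both yield "07_03_20240905".
```

Whenever you find yourself post-processing strftime output, it is usually a sign the format string itself can do the job.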
I hope that was a clear enough explanation. If you have any questions about the code or how to use it, use the contact form on this website and I will get back to you!