I have a relatively short list of cities which I want to plot on a world map. The list is a little too long for a manual lookup, but I don’t know exactly how I’ll use it in an app, so I figured out how to do it with some simple ruby in irb using the lovely geocoder gem.
I had my cities in a spreadsheet. I selected a few cells, pasted them into irb, and split the string on newlines to get an array of the cities.
gem install geocoder
>> require 'geocoder'
>> a = "Honolulu, HI
>> Boston, MA
>> New York, NY".split("\n")
=> ["Honolulu, HI", "Boston, MA", "New York, NY"]
>> a.each do |city|
>>   d = Geocoder.search(city).first
>>   ll = d.data["geometry"]["location"]
>>   puts "#{city} #{ll['lat']} #{ll['lng']}"
>> end
Honolulu, HI 21.3069444 -157.8583333
Boston, MA 42.3584308 -71.0597732
New York, NY 40.7143528 -74.00597309999999
Then I could copy/paste the irb output back into my spreadsheet. Ta Da!
It appears that the free Google API throttles requests, so this approach is only good for short lists of fewer than about 20 cities.
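For a somewhat longer list, one workaround is simply to pause between requests. This is my own sketch, not part of the geocoder gem, and the helper name each_throttled is made up:

```ruby
# Hypothetical helper: yields each item, sleeping between iterations
# to stay under a geocoding API's rate limit.
def each_throttled(items, delay: 1.0)
  items.each_with_index do |item, i|
    sleep(delay) if i > 0  # no pause before the first request
    yield item
  end
end

# Usage against the live API (same lookup as the irb session above):
# each_throttled(a, delay: 1.0) do |city|
#   ll = Geocoder.search(city).first.data["geometry"]["location"]
#   puts "#{city} #{ll['lat']} #{ll['lng']}"
# end
```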
I think it is important to actually understand the code that I copy/paste, so I took a little time to read up on the details which I’ve summarized below.
Favorite Testing Gems
I won’t elaborate on RSpec here; the concepts in this post likely apply to test-unit as well. I’ve written before about why RSpec is my favorite.
Capybara is the world’s largest rodent; in the Ruby community, it is also the name of a favorite gem for testing the content of web pages. (It is a successor to Bryan Helmkamp’s WebRAT, so named for Ruby Acceptance Testing, which instigated the rodent naming theme.) Capybara is wonderful for its support of many “drivers,” which provide a consistent API across solutions with different levels of browser support and different performance characteristics.
One of its creators, Joe Ferris, explains how it works (via Stack Overflow):
Capybara boots up your rack application using webrick or thin in a background thread.
The main thread sets up the driver, providing the port the rack application is running on.
Your tests ask the driver to interact with the application, which causes the fake web browser to perform requests against your application.
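That explanation translates into a spec_helper.rb fragment along these lines. This is a typical setup sketch; the specific driver choices and the fixed port are my assumptions, not from Joe’s answer:

```ruby
# spec_helper.rb (sketch): use a fast in-process driver by default,
# and a real headless browser only for tests tagged js: true.
Capybara.default_driver    = :rack_test  # in-process, no JavaScript support
Capybara.javascript_driver = :webkit     # capybara-webkit: boots the app in a background thread
Capybara.server_port       = 31337       # optional: fixed port for the rack app
```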
The DatabaseCleaner gem is super helpful for our typical Rails app that relies on a database. We always want a “clean slate” when we start our tests and this nifty gem gives us a bunch of options with a consistent interface for various database choices.
To configure these solutions correctly, it is critical to understand that with Capybara::Webkit our target app code runs in a separate process from our tests. This means that when we set up our test data, RSpec is running in one process and needs to actually write to the database, and then our app code reads from the database in another process. Whereas with Rack::Test, the tests and the target code run in the same process. That’s why we can’t use a “transaction” strategy to reset our test environment with Capybara::Webkit. Instead we use the “truncation” strategy, which simply blows away all of the data after each test run.
Why bother with transactions?
Truncation works just as well with Rack::Test as transactions do, so why introduce the complexity of two different configurations? The Database Cleaner README explains: “For the SQL libraries the fastest option will be to use :transaction as transactions are simply rolled back.” Sarah Mei elaborated on this by reminding me that the commit to the database is what takes the most time, and the transaction is never committed; it is simply rolled back at the end of your test. Transactions are pretty speedy, so we want to use the truncation strategy only when absolutely necessary.
config.use_transactional_fixtures = true

config.before(:each, js: true) do
  self.use_transactional_fixtures = false
  ActiveRecord::Base.establish_connection
  DatabaseCleaner.strategy = :truncation
  DatabaseCleaner.start
end

config.after(:each, js: true) do
  DatabaseCleaner.clean
  self.use_transactional_fixtures = true
end
How does this work exactly?
We are set up to use transactions by default, which is built into rspec-rails and does not rely on DatabaseCleaner. Then, for our JS tests, we tell RSpec not to use transactions and instead instruct DatabaseCleaner to set up before each test runs with DatabaseCleaner.start and then clean up after with DatabaseCleaner.clean.
I have no idea why the ActiveRecord::Base.establish_connection call is needed, but if we don’t include it, then rake spec hangs after my first JS test with this ominous warning:
WARNING: there is already a transaction in progress
Perhaps someone reading this can explain this detail. Meanwhile, I’m happy to have a configuration that works, and I hope this helps other folks who want fast tests that run reliably.
I’ve decided to use figaro, which allows me to easily configure my API keys without committing them to my source repo, which is very helpful when publishing open source code. We need to set up the app with an API key in order to auth with Twitter.
Click “Create a new application” and fill in the form. I called my app blue-parakeet for uniqueness — you’ll have to make up your own name.
Make sure you put in a callback URL, even though you won’t use it for development (OmniAuth passes Twitter a callback URL that overrides this setting); if you don’t supply one, you will get a 401 unauthorized error.
Read and Accept the Terms, then click “Create Your Twitter Application”
Now you have a “key” and “secret” (called “consumer key” and “consumer secret”) which you will need to configure your rails app.
Using Figaro gem for Configuring API keys
# config via Figaro gem, see: https://github.com/laserlemon/figaro
# rake figaro:heroku to push these to Heroku
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :twitter, ENV['TWITTER_KEY'], ENV['TWITTER_SECRET']
end
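Figaro reads those ENV values from config/application.yml, which it keeps out of source control for you. Mine looks something like this (placeholder values, obviously):

```yaml
# config/application.yml -- gitignored by Figaro, never committed
TWITTER_KEY: "your-consumer-key-here"
TWITTER_SECRET: "your-consumer-secret-here"
```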
Now OmniAuth is set up to auth with Twitter. Let’s run the server. Install mongo with brew install mongodb if you haven’t already. Also, if you don’t have mongo set up to run automatically at startup, then run Mongo:
However, when we authenticate, we get an error, since we haven’t configured our routes yet:
Create a Sessions Controller, Add Routes
Next step is a sessions controller and a route for the OAuth callback. We’ll make a placeholder create action that just reports the auth info we get back from Twitter.
On the command line:
rails generate controller sessions
Edit the newly created file, app/controllers/sessions_controller.rb
class SessionsController < ApplicationController
  def create
    render :text => request.env["omniauth.auth"].to_yaml
  end
end
add the following to config/routes.rb
get '/auth/:provider/callback' => 'sessions#create'
get '/auth/failure' => 'sessions#failure'
get '/signout' => 'sessions#destroy', :as => :signout
root :to => redirect("/auth/twitter") # for convenience
Now go to http://localhost:3000/auth/twitter. After authenticating with Twitter, you will see the user info that Twitter sends to the app in the authentication response (see the docs for an explanation of each field). The general fields, which are more consistent across providers, are in the “info” section, and most of the interesting Twitter-specific info is in the “extra” section:
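For reference, here is the rough shape of that hash. This is my abbreviated sketch with placeholder values; see the omniauth-twitter docs for the full field list:

```ruby
# Abbreviated omniauth.auth hash for Twitter (placeholder values):
auth = {
  "provider"    => "twitter",
  "uid"         => "12345",
  "info"        => { "name" => "Jane Doe", "nickname" => "janedoe" },
  "credentials" => { "token" => "...", "secret" => "..." },
  "extra"       => { "raw_info" => { "followers_count" => 42 } }
}

auth["info"]["name"]   # provider-agnostic fields live under "info"
auth["extra"]          # Twitter-specific details live under "extra"
```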
For this app, we’ll use a simple user model, just to show that there’s no magic here. We’re only using Twitter auth, not storing our own passwords, so we don’t really need the full features of the lovely Devise gem.
rails generate scaffold user provider:string uid:string name:string
Add to app/models/user.rb
def self.create_with_omniauth(auth)
  create! do |user|
    user.provider = auth['provider']
    user.uid = auth['uid']
    user.name = auth['info']['name'] || ""
  end
end
With Rails 4, the recommended pattern for locking down model attributes that we don’t want changed from form submits (or malicious attacks) is strong parameters in the controller. In app/controllers/users_controller.rb change:
params.require(:user).permit(:provider, :uid, :name)
and then remove the corresponding fields from app/views/users/_form.html.erb
Finally, the real create action for the sessions controller, plus a destroy action for the /signout url we defined earlier:
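Here is a sketch of how those actions can look. It assumes the create! block in app/models/user.rb is wrapped in a class method, here called User.create_with_omniauth, and a Rails environment, so treat it as a starting point rather than a definitive implementation:

```ruby
class SessionsController < ApplicationController
  def create
    auth = request.env["omniauth.auth"]
    # Find an existing user by provider + uid, or create one from the auth hash
    user = User.where(provider: auth["provider"], uid: auth["uid"]).first ||
           User.create_with_omniauth(auth)
    session[:user_id] = user.id
    redirect_to root_url
  end

  def destroy
    session[:user_id] = nil  # forget the signed-in user
    redirect_to root_url
  end

  def failure
    redirect_to root_url
  end
end
```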
With this app, we’ve got a basic understanding of Twitter OAuth using Rails 4 and the OmniAuth gem. We didn’t actually do anything specific to MongoDB, and there’s no testing yet. It is important to understand the technology we’re working with before testing or even writing production code.