Extracting fixtures

I’m still using fixtures. Shame, I know.

Why do I use them instead of Factory Girl or a similar solution? Well, fixtures can be much closer to real data than mocks from a factory. How come, you ask? Fixtures are imaginary data, exactly like mocks from other sources!

My answer is: it depends on how you create the fixtures. If you create them by hand, they are indeed disconnected from the real world (like all mocks).

What is your real data/mocks ratio? (photo CC by hsing)

But I prefer to extract fixtures from a real (production) database. That way I can easily pick entries created by users which are edge cases. The only trouble is creating the fixtures. For some time I have been using a modified extract_fixtures rake task: I have added an SQL condition to the extraction process to select only particular records, and adjusted the syntax to recent rake versions.
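The core of such a task can be sketched like this (the method name and sample data are my own, not the actual task; in the real rake task the rows would come from ActiveRecord's connection.select_all, with the extra SQL condition appended to the query):

```ruby
require "yaml"

# Sketch of the heart of an extract_fixtures-style task: turn database
# rows (hashes, as returned by connection.select_all) into fixture YAML.
def rows_to_fixture_yaml(table_name, rows)
  i = "000"
  rows.inject({}) { |fixtures, record|
    # Each record becomes a named fixture: airports_001, airports_002, ...
    fixtures["#{table_name}_#{i.succ!}"] = record
    fixtures
  }.to_yaml
end

rows = [
  { "id" => 1, "code" => "WAW" },
  { "id" => 2, "code" => "KRK" },
]
puts rows_to_fixture_yaml("airports", rows)
```

In the task proper you would loop over the tables and write each result to test/fixtures/#{table_name}.yml, with the SQL condition (e.g. passed via an environment variable) appended as a WHERE clause before the query runs.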

This is especially useful when you are about to take over application code which has no tests. Extracting real data is a quick way to start writing integration tests (in such a case they are the most efficient in terms of time invested versus application code coverage).

How to extract fixtures without pain?


Logger – simple, yet powerful Rails debugging tool

I don’t know about you, but for me logs are the most powerful debugging tool. Placing logger.debug or logger.info calls in many places can quickly show what is happening inside a Rails application.

This approach is especially useful when something is going wrong and the trigger is unknown. Placing many logging directives provides data for analysing what the reason could be.

Logbook (photo CC by Admond)

The default Rails logger has one serious flaw which makes logs on production sites almost useless: messages are not grouped by request. If you have many Rails processes running and logging to a single file, some requests will be processed in parallel and the log entries get mixed. With the default log format there is no way to tell which entry comes from which process.

Since Rails 2.0 we have ActiveSupport::BufferedLogger, but it solves a different problem: the number of disk writes and file locks. You can set after how many entries the log is flushed to disk.
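For reference, enabling that buffering looks roughly like this (a Rails 2.x sketch; the flush interval of 50 is an arbitrary example, not a recommendation):

```ruby
# config/environment.rb (Rails 2.x): flush the log to disk only every
# 50 messages instead of on every write. 50 is an arbitrary example.
config.logger = ActiveSupport::BufferedLogger.new("log/#{RAILS_ENV}.log")
config.logger.auto_flushing = 50
```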

AnnotatedLogger

Here comes AnnotatedLogger to the rescue. The idea is to prefix each message with the PID of the Rails process. As long as you don’t run Rails in multithreaded mode, this is a unique ID which makes every log entry distinguishable.

class AnnotatedLogger < Logger
  # Redefine each severity method to prefix every message (and every
  # line of a multi-line message) with the PID of the Rails process.
  [:info, :debug, :warn, :error, :fatal].each do |m|
    class_eval %Q|
      def #{m}(arg = nil, &block)
        pid = "%.5d:" % $$
        arg = yield if block_given?
        super("%s %s" % [pid, arg.to_s.gsub(/\\n/, "\\n%s" % pid)])
      end
    |
  end
end

Now, in the Rails::Initializer.run do |config| section of config/environment.rb, set AnnotatedLogger as the default Rails logger:

 config.logger = AnnotatedLogger.new "log/#{RAILS_ENV}.log"

Of course you can add other data to log entries (a timestamp?). Here is an example of the resulting log entries:

24551:Processing SearchController#processing (for [FILTERED] at 2009-09-15 15:11:24) [GET]
24551:   Session ID: df260892836fc619ec666f894e7d8e88
24551:   Parameters: {[FILTERED]}
24542:   Airport Load (0.216460)   SELECT * FROM [FILTERED]
24542: Completed in 0.24903 (4 reqs/sec) | Rendering: 0.01298 (5%) | DB: 0.22554 (90%) | 200 OK [FILTERED]
24551:   Search Columns (0.004711)   SHOW FIELDS FROM [FILTERED]
24551: Rendering template within layouts/blank
24551: Rendering search/processing

Without PIDs you would expect that the Airport Load entry is part of SearchController#processing for the session with ID df260892836fc619ec666f894e7d8e88. In reality it is output from processing a different request.

What else?

This is how I deal with logs from a Rails application. Do you have other ideas on how to make logging more usable outside of the development environment?


PS
I have just had another idea: you could probably use BufferedLogger with auto flushing disabled and patch ActionController to manually flush all entries after a request is processed; then all messages would be dumped in a single block.

Testing binary downloads with Webrat

I’m using Webrat to keep some sanity when taking over maintenance of a new application. Customers often come to me with legacy code which somehow is not covered by tests.

In such a case integration tests are the way to go, since they provide the most bang for your buck: each written test can cover many parts of the application.

I had to create a test for downloading some data in CSV format (did you say binary? :)). With the default matchers from Webrat you won’t be able to write effective assertions, and that is why I’m referring to such a file as binary.


So how do you do it? Here is a quick tip.

Use Webrat’s response_body to get the raw body returned by the application, like this:

require "csv"

click_link "Get me some CSV data"
ret = CSV.parse(response_body)
assert_equal(
  2,
  ret[2][5].to_f,
  "In the third row and sixth column you should have 2, but there is #{ret[2][5]}"
)