
published by noreply@blogger.com (Ben Witten) on 2017-01-25 14:50:00 in the "Conference" category
Last week, Smartrac exhibited at the retail industry's BIG Show, NRF 2017, using a Liquid Galaxy with custom animations to showcase their technology.

Smartrac provides data analytics to retail chains by tracking physical goods with NFC and Bluetooth tags that combine to track goods all the way from the factory to the distribution center to retail stores. It's a complex but complete solution. How best to visualize all that data and show the incredible value that Smartrac brings? Seven screens with real-time maps in Cesium, 3D store models in Unity, and browser-based dashboards, of course. End Point has been working with Smartrac for a number of years as a development resource on their SmartCosmos platform, helping them with UX and back-end database interfaces. This work included development of REST-based APIs for data handling, as well as a Virtual Reality project utilizing the Unity game engine to visualize data and marketing materials directly on several platforms including the Oculus Rift, the Samsung Gear 7 VR, and WebGL. Bringing that back-end work forward in a highly visible platform for the retail conference was a natural extension for them, and the large Liquid Galaxy display fulfilled that role perfectly, letting Smartrac showcase their tools at a much larger scale.

For this project, End Point deployed two new technologies for the Liquid Galaxy:
  • Cesium Maps - Smartrac had two major requirements for their data visualizations: show the complexity and global reach of the solution, while also keeping the map data available offline wherever possible to guard against the sketchy Internet connections that are a constant risk at convention centers. For this, we deployed Cesium instead of Google Earth, as it allowed for a fully offline tileset that we could store locally on the server, as well as providing a rich data visualization set (we've shown other examples before).
  • Unity3D Models - Smartrac also wanted to show how their product tracking works in a typical retail store. Rather than trying to wire up a whole physical solution in the short setup period of a convention, however, they made the call to visualize everything with Unity, a very popular 3D rendering engine. Given the multiple screens of the Liquid Galaxy, and our ability to adjust the view angle for each screen in the arch around the viewers, this Unity solution was immersive and told their story quite well.
Smartrac showcased multiple scenes that incorporated 3D content with live data, labels superimposed on maps, and a multitude of supporting metrics. End Point developers worked on custom animations to show their tools in an engaging demo. During the convention, Smartrac had representatives leading attendees through the Liquid Galaxy presentations to show their data. Video of these presentations can be viewed below.



Smartrac's Liquid Galaxy received positive feedback from everyone who saw it, exhibitors and attendees alike. Smartrac felt it was a great way to walk through their content, and attendees both enjoyed the presentations and were intrigued by the display itself. Many attendees who had never seen a Liquid Galaxy before inquired about it.

If you'd like to learn more about Liquid Galaxy, the new projects we are working on, or having custom content developed, please visit our Liquid Galaxy website or contact us here.

published by noreply@blogger.com (Josh Lavin) on 2016-07-27 11:30:00 in the "Conference" category

In June, I traveled to Orlando, Florida to attend the event formerly known as Yet Another Perl Conference (or YAPC::NA), now known as The Perl Conference. This was my second time in a row to attend this conference (after my first attendance back in 2007).

Conferences are a great place to learn how others are using various tools, hear about new features, and interact with the community. If you are speaking, it's a great opportunity to brush up on your subject, which was true for me in the extreme, as I was able to give a talk on the PostgreSQL database, which I hadn't used in a long time (more on that later).

The conference name

The event organizers were able to license the name The Perl Conference from O'Reilly Media, as O'Reilly doesn't hold conferences by this name anymore. This name is now preferred over "YAPC" as it is more friendly to newcomers and more accurately describes the conference. More on the name change.

Notes from the conference

Over the three days of the conference, I was able to take in many talks. Here are some of my more interesting notes from various sessions:

  • MetaCPAN is the best way to browse and search for Perl modules. Anyone can help with development of this fine project, via their GitHub.
  • Ricardo Signes says "use subroutine signatures!" They are "experimental", but are around to stay.
  • Perl6 is written in Perl6 (and something called "Not Quite Perl"). This allows one to read the source to figure out how something is done. (There were many talks on Perl6, which is viewed as a different programming language, not a replacement for Perl5.)
  • jq is a command-line utility that can pretty-print JSON (not Perl, but nice!)
  • Ricardo Signes gave a talk on encoding that was over my head, but very interesting.
  • The presenter of Emacs as Perl IDE couldn't attend, so Damian Conway spoke on VIM as Perl IDE instead

From John Anderson's talk:

  • Just say "no" to system Perl. Use plenv or the like.
  • There's a DuckDuckGo bang command for searching MetaCPAN: !cpan [module]
  • Use JSON::MaybeXS over the plain JSON module.
  • Use Moo for object-oriented programming in Perl, or Moose if you must (see the sketch after this list)
  • Subscribe to Perl Weekly
  • Submit your module on PrePAN first, to receive feedback before posting to CPAN.
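
To make a couple of those recommendations concrete, here is a minimal sketch of a Moo class alongside JSON::MaybeXS. The Talk class and its attributes are hypothetical, invented purely for illustration:

#!/usr/bin/env perl
use strict;
use warnings;

# A tiny Moo class (the "Talk" name and attributes are made up).
package Talk;
use Moo;

has title   => (is => 'ro', required => 1);  # read-only, must be supplied
has speaker => (is => 'rw');                 # read-write, optional

package main;

# JSON::MaybeXS transparently picks the best installed backend
# (Cpanel::JSON::XS, JSON::XS, or the pure-Perl JSON::PP).
use JSON::MaybeXS qw(encode_json);

my $talk = Talk->new(title => 'Battling a legacy schema', speaker => 'Lee');
print encode_json({ title => $talk->title, speaker => $talk->speaker }), "\n";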

Lee Johnson gave a talk called Battling a legacy schema with DBIx::Class (video). Key takeaways:

  • When should I use DBIC?
  • Something that has grown organically could be considered "legacy," as it accumulates technical debt
  • With DBIC, you can start to manage that debt by adding relationships to your model, even if they aren't in your database
  • RapidApp's rdbic can help you visualize an existing database

D. Ruth Bavousett spoke on Perl::Critic, which is a tool for encouraging consistency. Basically, Perl::Critic looks at your source code and makes suggestions for improvement. These suggestions come from "policies", any of which can be enabled or disabled; you can even write new policies. One suggestion was to create a Git hook to run the perlcritic command at the time code is committed to the source code repository (possibly using App::GitHooks). End Point has its own perlcritic configuration, which I have started trying to use more.
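
As a rough illustration of how Perl::Critic can also be used programmatically (most people simply run the perlcritic command; the file name below is hypothetical):

use strict;
use warnings;
use Perl::Critic;

# Severity ranges from 5 ("gentle", only the worst offenses) down to 1 ("brutal").
my $critic = Perl::Critic->new(-severity => 3);

# Critique a (hypothetical) module; each violation stringifies to a
# human-readable report, so printing the list shows them all.
my @violations = $critic->critique('lib/MyApp.pm');
print @violations;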

Logan Bell shared Strategies for leading a remote team. Some of the tools and techniques he uses include:

  • tmate for terminal sharing
  • HipChat, with a chat room just for complaining called "head to desk"
  • Holds one-on-one meetings every Monday for those he works with and directs
  • Has new team members work on-site with another team member their first week or so, to help understand personalities that don't often come across well over chat
  • Tries to have in-person meetings every quarter or at least twice a year, to bring the team together

My talk

Finally, my own talk was titled Stranger in a Strange Land: PostgreSQL for MySQL users (video). I hadn't used Postgres in about seven years, and I wanted to get re-acquainted with it, so naturally, I submitted a talk on it to spur myself into action!

In my talk, I covered:

  • the history of MySQL and Postgres
  • how to pronounce "PostgreSQL"
  • why one might be preferred over the other
  • how to convert an existing database to Postgres
  • and some tools and tips.

I enjoyed giving this talk, and hope others found it helpful. All in all, The Perl Conference was a great experience, and I hope to continue attending in the future!

All videos from this year's conference



published by noreply@blogger.com (Phin Jensen) on 2016-04-19 17:21:00 in the "Conference" category

Another talk from MountainWest RubyConf that I enjoyed was How to Build a Skyscraper by Ernie Miller. This talk was less technical and instead focused on teaching principles and ideas for software development by examining some of the history of skyscrapers.

Equitable Life Building

Constructed from 1868 to 1870 and considered by some to be the first skyscraper, the Equitable Life Building was, at 130 feet, the tallest building in the world at the time. An interesting problem arose when designing it: it was too tall for stairs. If a lawyer's office was on the seventh floor of the building, he wouldn't want his clients to walk up six flights of stairs to meet with him.

Elevators and hoisting systems existed at the time, but they had one fatal flaw: there were no safety systems if the rope broke or was cut. While working on converting a sawmill to a bed frame factory, a man named Elisha Otis had the idea for a system to stop an elevator if its rope is cut. He and his sons designed the system and implemented it at the factory. At the time, he didn't think much of the design, and didn't patent it or try to sell it.

Otis' invention became popular when he showcased it at the 1854 New York World's Fair with a live demo. Otis stood in front of a large crowd on a platform and ordered the rope holding it to be cut. Instead of plummeting to the ground, the platform was caught by the safety system after falling only a few inches.

Having a way to safely and easily travel up and down many stories literally flipped the value proposition of skyscrapers upside down. Where lower floors had been more desirable because they were easier to reach, higher floors were now more coveted: just as easy to reach, but with the advantages that come with height, such as better air, better light, and less noise. A solution that seems unremarkable to you might just change everything for others.

When the Equitable Life Building was first constructed, it was described as fireproof. Unfortunately, it didn't work out quite that way. On January 9, 1912, the timekeeper for a cafe in the building started his day by lighting the gas in his office. Instead of disposing properly of the match, he distractedly threw it into the trashcan. Within 10 minutes, the entire office was engulfed in flame, which spread to the rest of the building, completely destroying it and killing six people.

Never underestimate the power of people to break what you build.

Home Insurance Building

The Home Insurance Building, constructed in 1884, was the first building to use a fireproof metal frame to bear the weight of the building, as opposed to using load-bearing masonry. The building was designed by William LeBaron Jenney, who was struck by inspiration when his wife placed a heavy book on top of a birdcage. From Wikipedia:

According to a popular story, one day he came home early and surprised his wife who was reading. She put her book down on top of a bird cage and ran to meet him. He strode across the room, lifted the book and dropped it back on the bird cage two or three times. Then, he exclaimed: "It works! It works! Don't you see? If this little cage can hold this heavy book, why can't an iron or steel cage be the framework for a whole building?"

With this idea, he was able to design and build the Home Insurance Building to be 10 stories and 138 feet tall while weighing only a third of what the same building in stone would have weighed. The lesson: find inspiration from unexpected places.

Monadnock Building

The Monadnock Building was designed by Daniel Burnham and John Wellborn Root. Burnham preferred simple and functional designs and was known for his stinginess, while Root was more artistically inclined and known for his detailed ornamentation on building designs. Despite their philosophical differences, they were one of the world's most successful architectural firms.

One of the initial sketches for the building included Ancient Egyptian-inspired ornamentation with slight flaring at the top. Burnham didn't like the design, as illustrated in a letter he wrote to the property manager:

My notion is to have no projecting surfaces or indentations, but to have everything flush .... So tall and narrow a building must have some ornament in so conspicuous a situation ... [but] projections mean dirt, nor do they add strength to the building ... one great nuisance [is] the lodgment of pigeons and sparrows.

While Root was on vacation, Burnham worked to re-design the building to be straight up-and-down with no ornamentation. When Root returned, he initially objected to the design but eventually embraced it, declaring that the heavy lines of the Egyptian pyramids captured his imagination. We can learn a simple lesson from this: Learn to embrace constraints.

When construction was completed in 1891, the building was a total of 17 stories (including the attic) and 215 feet tall. At the time, it was the tallest commercial structure in the world. It is also the tallest load-bearing brick building constructed. In fact, to support the weight of the entire building, the walls at the bottom had to be six feet (1.8 m) wide.

Because of the soft soil of Chicago and the weight of the building, it was designed to settle 8 inches into the ground. By 1905, it had settled that much and several inches more, which led to the reconstruction of the first floor. By 1948, it had settled 20 inches, making the entrance a step down from the street. If you only focus on profitability, don't be surprised when you start sinking.

Fuller Flatiron Building

The Flatiron building, constructed in 1902, was also designed by Daniel Burnham, although Root had died of pneumonia during the construction of the Monadnock building. The Flatiron building presented an interesting problem because it was to be built on an odd triangular plot of land. In fact, the building was only 6 and a half feet wide at the tip, which obviously wouldn't work with the load-bearing masonry design of the Monadnock building.

So the building was constructed using a steel-frame structure that would keep the walls to a practical size and allow them to fully utilize the plot of land. The space you have to work with should influence how you build and you should choose the right materials for the job.

During construction of the Flatiron building, New York locals called it "Burnham's Folly" and began to place bets on how far the debris would fall when a wind storm came and knocked it over. However, an engineer named Corydon Purdy had designed a steel bracing system that would protect the building from wind four times as strong as it would ever feel. During a 60-mph windstorm, tenants of the building claimed that they couldn't feel the slightest vibration inside the building. This gives us another principle we can use: Testing makes it possible to be confident about what we build, even when others aren't.

40 Wall Street v. Chrysler Building


40 Wall Street
Photo by C R, CC BY-SA 2.0

The stories of 40 Wall Street and the Chrysler Building start with two architects, William Van Alen and H. Craig Severance. Van Alen and Severance established a partnership together in 1911 which became very successful. However, as time went on, their personal differences caused strain in the relationship and they separated on unfriendly terms in 1924. Soon after the partnership ended, they found themselves to be in competition with one another. Severance was commissioned to design 40 Wall Street while Van Alen would be designing the Chrysler Building.

The Chrysler Building was initially announced in March of 1929, planned to be built 808 feet tall. Just a month later, Severance was one-upping Van Alen by announcing his design for 40 Wall Street, coming in at 840 feet. By October, Van Alen announced that the steel work of the Chrysler Building was finished, putting it as the tallest building in the world, over 850 feet tall. Severance wasn't particularly worried, as he already had plans in motion to build higher. Even after reports came in that the Chrysler Building had a 60-foot flagpole at the top, Severance made more changes for 40 Wall Street to be taller than the Chrysler Building. These plans were enough for the press to announce that 40 Wall Street had won the race to build highest, since construction of the Chrysler Building was too far along to be built any higher.


The Chrysler Building
Photo by Chris Parker, CC BY-ND 2.0

Unfortunately for Severance, the 60-foot flagpole wasn't a flagpole at all. Instead, it was part of a 185-foot steel spire which Van Alen had designed and had built and shipped to the construction site in secret. On October 23rd, 1929, the pieces of the spire were hoisted to the top of the building and installed in just 90 minutes. The spire was initially mistaken for a crane, and it wasn't until 4 days after it was installed that the spire was recognized as a permanent part of the building, making it the tallest in the world. When all was said and done, 40 Wall Street came in at 927 feet, with a cost of $13,000,000, while the Chrysler Building finished at 1,046 feet and cost $14,000,000.

There are two morals we can learn from this story: There is opportunity for great work in places nobody is looking and big buildings are expensive, but big egos are even more so.

Empire State Building

The Empire State Building was built in just 13 months, from March 17, 1930, to April 11, 1931. Its primary architects were Richmond Shreve and William Lamb, who were part of the team assembled by Severance to design 40 Wall Street. They were joined by Arthur Harmon to form Shreve, Lamb & Harmon. Lamb's partnership with Shreve was not unlike that of Van Alen and Severance or Burnham and Root. Lamb was more artistic in his architecture, but he was also pragmatic, using his time and design constraints to shape the design and characteristics of the building.

Lamb completed the building drawings in just two weeks, designing from the top down, which was a very unusual method. When designing the building, Lamb made sure that even when he was making concessions, using the building would be a pleasant experience for those who mattered. Lamb was able to complete the design so quickly because he reused previous work, specifically the Reynolds Building in Winston-Salem, NC, and the Carew Tower in Cincinnati, Ohio.

In November of 1929, Al Smith, who commissioned the building as head of Empire State, Inc., announced that the company had purchased land next to the plot where the construction would start, in order to build higher. Shreve, Lamb, and Harmon were opposed to this idea since it would force tenants of the top floors to switch elevators on the way up, and they were focused on making the experience as pleasant as possible.

John Raskob, one of the main people financing the building, wanted the building to be taller. While looking at a small model of the building, he reportedly said "What this building needs is a hat!" and proposed his idea of building a 200-foot mooring tower for a zeppelin at the top of the building, despite several problems such as high winds making the idea unfeasible. But Raskob felt that he had to build the tallest building in the world, despite all of the problems and the higher cost that a taller building would introduce, because people can rationalize anything.

There are two more things we should note about the story of the Empire State Building. First, despite the fact that it was designed top-to-bottom, it wasn't built like that. No matter how something is designed, it needs to be built from the bottom up. Second, the Empire State Building was a big accomplishment in architecture and construction, but at no small cost. Five people died during the construction of the building. That may seem like a small number considering the scale of the project, but we should remember that no matter how important speed is, it's not worth losing people over.

United Nations Headquarters

The United Nations Headquarters were constructed between 1948 and 1952. The building wasn't built to be particularly tall (less than half the height of the Empire State Building), but it came with its own set of problems: its wide faces are almost completely covered in windows. These windows offer great lighting and views, but when the sun shines on them, they generate a lot of heat, not unlike a greenhouse. Unless you're building a greenhouse, you probably don't want that. It doesn't matter how pretty your building is if nobody wants to occupy it.

The solution to the problem had been created years before by an engineer named Willis Carrier, who created an "Apparatus for Treating Air" (now called an air conditioner) to keep the paper in a printing press from being wrinkled. By creating this air conditioner, Carrier didn't just make something cool. He made something cool that everyone can use. Without it, buildings like the UNHQ could never have been built.

Willis (or Sears) Tower


Willis Tower, left

The Willis Tower was built between 1970 and 1973. Fazlur Rahman Khan was hired as the structural engineer for the tower, which needed to be very tall in order to house all of the employees of Sears. A steel frame design wouldn't work well in Chicago (also known as the Windy City), since steel frames tend to bend and sway in heavy winds, which can cause discomfort for people on higher floors, even causing seasickness in some cases.

To solve the problem, Khan invented a "bundled tube structure", which put the structure of a building on the outside as a thin tube. Using the tube structure not only allowed Khan to build a higher tower, but it also increased floor space and cost less per unit area. But these innovations only came because Khan realized that the higher you build, the windier it gets.

Taipei 101

Taipei 101 was constructed from 1999 to 2004 near the Pacific Ring of Fire, the most seismically active part of the world. Earthquakes present very different problems from wind, since they affect a building at its base instead of the top. Because of its location, the building needed to be able to withstand both typhoon-force winds (up to 130 mph) and extreme earthquakes, which meant that it had to be designed to be both structurally strong and flexible.

To accomplish this, the building was constructed with high-performance steel, 36 columns, and 8 "mega-columns" packed with concrete, connected by outrigger trusses which acted similarly to rubber bands. During the construction of the building, Taipei was hit by a 6.8-magnitude earthquake which destroyed smaller buildings around the skyscraper, and even knocked cranes off of the incomplete building, but when the building was inspected it was found to have no structural damage. By being rigid where it has to be and flexible where it can afford to be, Taipei 101 is one of the most stable buildings ever constructed.

Of course, being flexible introduces the problem of discomfort for people in higher parts of the building. To solve this problem, Taipei 101 was built with a massive 728-ton (1,456,000 lb) tuned mass damper, which helps to offset the swaying of the building in strong winds. We can learn from this damper: When the winds pick up, it's good to have someone (or something) at the top pulling for you.

Burj Khalifa

The newest and tallest building on our list, the Burj Khalifa was constructed from 2004 to 2009. With the Burj Khalifa, the design problems came with incorporating adequate safety features. After the terrorist attacks of September 11, 2001, the problem of evacuation became more prominent in the construction and design of skyscrapers. When it comes to an evacuation, stairs are basically the only way to go, and going down many flights of stairs can be as difficult as going up them, especially if the building is burning around you. The Burj Khalifa is nearly twice as tall as the old World Trade Center, and in the event of an emergency, walking down nearly half a mile of stairs won't work out.

So how do the people in the Burj Khalifa get out in an emergency? Well, they don't. Instead, the Burj Khalifa is designed with periodic safe rooms protected by reinforced concrete and fireproof sheeting that will protect the people inside for up to two hours during a fire. Each room has a dedicated supply of air, which is delivered through fire-resistant pipes. These safe rooms are placed every 25 floors or so, because a safe space won't do any good if it can't be reached.

You may know that the most common cause of death in a fire is actually smoke inhalation, not the fire itself. To deal with this, the Burj Khalifa has a network of high powered fans throughout which will blow clean air from outside into the building and keep the stairwells leading to the safe rooms clear of smoke. A very important part of this is pushing the smoke out of the way, eliminating the toxic elements.

It's important to remember that these safe spaces, as useful as they may be, are not a substitute for rescue workers coming to aid the people trapped in the building. The safe rooms are only there to protect people who can't help themselves until help can come. Because, after all, what we build is only important because of the people who use it.

Thanks to Ernie Miller for a great talk! The video is also available on YouTube.



published by noreply@blogger.com (Phin Jensen) on 2016-04-09 02:14:00 in the "Conference" category

On March 21 and 22, I had the opportunity to attend the 10th and final MountainWest RubyConf at the Rose Wagner Performing Arts Center in Salt Lake City.

One talk that I really enjoyed was Writing a Test Framework from Scratch by Ryan Davis, author of MiniTest. His goal was to teach the audience how MiniTest was created, by explaining the what, why and how of decisions made throughout the process. I learned a lot from the talk and took plenty of notes, so I'd like to share some of that.

The first thing a test framework needs is an assert function, which will simply check if some value or comparison is true. If it is, great, the test passed! If not, the test failed and an exception should be raised. Here is our first assert definition:

def assert test
  raise "Failed test" unless test
end

This function is the bare minimum you need to test an application; however, it won't be easy or enjoyable to use. The first step to improve this is to make the error messages clearer. This is what the current assert function will return for an error:

path/to/microtest.rb:2:in `assert': Failed test (RuntimeError)
        from test.rb:5:in `<main>'

To make this more readable, we can change the raise statement a bit:

def assert test
  raise RuntimeError, "Failed test", caller unless test
end

A failed assert will now throw this error, which does a better job of explaining where things went wrong:

test.rb:5:in `<main>': Failed test (RuntimeError)

Now we're ready to create another assertion function, assert_equal. A test framework can have many different types of assertions, but when testing real applications, the vast majority will be tests for equality. Writing this assertion is easy:

def assert_equal a, b
  assert a == b
end

assert_equal 4, 2+2 # this will pass
assert_equal 5, 2+2 # this will raise an error

Great, right? Wrong! Unfortunately, the error messages have gone right back to being unhelpful:

path/to/microtest.rb:6:in `assert_equal': Failed test (RuntimeError)
        from test.rb:9:in `<main>'

There are a couple of things we can do to improve these error messages. First, we can filter the backtrace to make it more clear where the error is coming from. Second, we can add a parameter to assert which will take a custom message.

def assert test, msg = "Failed test"
  unless test then
    bt = caller.drop_while { |s| s =~ /#{__FILE__}/ }
    raise RuntimeError, msg, bt
  end
end

def assert_equal a, b
  assert a == b, "Failed assert_equal #{a} vs #{b}"
end

#=> test.rb:9:in `<main>': Failed assert_equal 5 vs 4 (RuntimeError)

This is much better! We're ready to move on to another assert function, assert_in_delta. Because of the way floating point numbers are represented, comparing them for equality won't work. Instead, we will check to see that they are within a certain range of each other. We can do this with a simple calculation: (a-b).abs <= ε, where ε is a very small number, like 0.001 (in reality, you will probably want a smaller delta than that). Here's the function in Ruby:

def assert_in_delta a, b
  assert (a-b).abs <= 0.001, "Failed assert_in_delta #{a} vs #{b}"
end

assert_in_delta 0.0001, 0.0002 # pass
assert_in_delta 0.5000, 0.6000 # raise

We now have a solid base for our test framework. We have a few assertions and the ability to easily write more. Our next logical step would be to make a way to put our assertions into separate tests. Organizing these assertions allows us to refactor more easily, reuse code more effectively, avoid problems with conflicting tests, and run multiple tests at once.

To do this, we will wrap our assertions into functions and those functions into classes, giving us two layers of compartmentalization.

class XTest
  def first_test
    a = 1
    assert_equal 1, a # passes
  end

  def second_test
    a = 1
    a += 1
    assert_equal 2, a # passes
  end

  def third_test
    a = 1
    assert_equal 1, a # passes
  end
end

That adds some structure, but how do we run the tests now? It's not pretty:

XTest.new.first_test
XTest.new.second_test
XTest.new.third_test

Each test function needs to be called specifically, by name, which will become very tedious once there are 5, or 10, or 1000 tests. This is obviously not the best way to run tests. Ideally, the tests would run themselves, and to do that we?ll start by adding a method to run our tests to the class:

class XTest
  def run name
    send name
  end

  # ...test methods...
end

XTest.new.run :first_test
XTest.new.run :second_test
XTest.new.run :third_test

This is still very cumbersome, but it puts us in a better position, closer to our goal of automation. Using Class.public_instance_methods, we can find which methods are tests:

XTest.public_instance_methods
# => %w[some_method one_test two_test ...]

XTest.public_instance_methods.grep(/_test$/)
# => %w[one_test two_test red_test blue_test]

And run those automatically.

class XTest
  def self.run
    public_instance_methods.grep(/_test$/).each do |name|
      self.new.run name
    end
  end
  # def run...
  # ...test methods...
end

XTest.run # => All tests run

This is much better now, but we can still improve our code. If we try to make a new set of tests, called YTest for example, we would have to copy these run methods over. It would be better to move the run methods into a new abstract class, Test, and inherit from that.

class Test
  # ...run & assertions...
end 

class XTest < Test
  # ...test methods...
end 

XTest.run

This improves our code structure significantly. However, when we have multiple classes, we get that same tedious repetition:

XTest.run
YTest.run
ZTest.run # ...ugh

To solve this, we can have the Test class create a list of classes which inherit it. Then we can write a method in Test which will run all of those classes.

class Test
  TESTS = []

  def self.inherited x
    TESTS << x
  end 

  def self.run_all_tests
    TESTS.each do |klass|
      klass.run
    end 
  end 
  # ...self.run, run, and assertions...
end 

Test.run_all_tests # => We can use this instead of XTest.run; YTest.run; etc.

We're really making progress now. The most important feature our framework is missing is some way of reporting test success and failure. A common way to do this is to simply print a dot when a test successfully runs.

def self.run_all_tests
  TESTS.each do |klass|
    klass.run
  end
  puts
end 

def self.run
  public_instance_methods.grep(/_test$/).each do |name|
    self.new.run name 
    print "."
  end 
end

Now, when we run the tests, it will look something like this:

% ruby test.rb
...

Indicating that we had three successful tests. But what happens if a test fails?

% ruby test.rb
.test.rb:20:in `test_assert_equal_bad': Failed assert_equal 5 vs 4 (RuntimeError)
  [...tons of blah blah...]
  from test.rb:30:in `<main>'

The very first error we come across will stop the entire test. Instead of the error being printed naturally, we can catch it and print the error message ourselves, letting other tests continue:

def self.run
  public_instance_methods.grep(/_test$/).each do |name|
    begin
      self.new.run name
      print "."
    rescue => e
      puts
      puts "Failure: #{self}##{name}: #{e.message}"
      puts "  #{e.backtrace.first}"
    end 
  end 
end

# Output

% ruby test.rb
.
Failure: Class#test_assert_equal_bad: Failed assert_equal 5 vs 4  
  test.rb:20:in `test_assert_equal'
.

That's better, but it's still ugly. We have failures interrupting the visual flow and getting in the way. We can improve on this. First, we should reexamine our code and try to organize it more sensibly.

def self.run
  public_instance_methods.grep(/_test$/).each do |name|
    begin
      self.new.run name
      print "."
    rescue => e
      puts
      puts "Failure: #{self}##{name}: #{e.message}"
      puts "  #{e.backtrace.first}"
    end
  end
end

Currently, this one function is doing 4 things:

  1. Line 2 is selecting and filtering tests.
  2. The begin clause is handling errors.
  3. `self.new.run name` runs the tests.
  4. The various puts and print statements print results.

This is too many responsibilities for one function. Test.run_all_tests should simply run classes, Test.run should run multiple tests, Test#run should run a single test, and result reporting should be done by... something else. We'll get back to that. The first thing we can do to improve this organization is to push the exception handling into the individual test running method.

class Test
  def run name
    send name
    false
  rescue => e
    e
  end

  def self.run
    public_instance_methods.grep(/_test$/).each do |name|
      e = self.new.run name
      
      unless e then
        print "."
      else
        puts
        puts "Failure: #{self}##{name}: #{e.message}"
        puts " #{e.backtrace.first}"
      end
    end
  end
end

This is a little better, but Test.run is still handling all the result reporting. To improve on that, we can move the reporting into another function, or better yet, its own class.

class Reporter
  def report e, name
    unless e then
      print "."
    else
      puts
      puts "Failure: #{self}##{name}: #{e.message}"
      puts " #{e.backtrace.first}"
    end
  end

  def done
    puts
  end
end

class Test
  def self.run_all_tests
    reporter = Reporter.new

    TESTS.each do |klass|
      klass.run reporter
    end
   
    reporter.done
  end
 
  def self.run reporter
    public_instance_methods.grep(/_test$/).each do |name|
      e = self.new.run name
      reporter.report e, name
    end
  end

  # ...
end

By creating this Reporter class, we move all IO out of the Test class. This is a big improvement, but there's a problem with this class. It takes too many arguments to get the information it needs, and it's not even getting everything it should have! See what happens when we run tests with Reporter:

.
Failure: ##test_assert_bad:
Failed test
 test.rb:9:in `test_assert_bad'
.
Failure: ##test_assert_equal_bad: Failed
assert_equal 5 vs 4
 test.rb:17:in `test_assert_equal_bad'
.
Failure: ##test_assert_in_delta_bad: Failed
assert_in_delta 0.5 vs 0.6
 test.rb:25:in `test_assert_in_delta_bad'

Instead of reporting what class has the failing test, it's saying what reporter object is running it! The quickest way to fix this would be to simply add another argument to the report function, but that just creates a more tangled architecture. It would be better to make report take a single argument that contains all the information about the error. The first step to do this is to move the error object into a Test class attribute:

class Test
  # ...
  attr_accessor :failure
  
  def initialize
    self.failure = false
  end

  def run name
    send name
    false
  rescue => e
    self.failure = e
    self
  end
end

After moving the failure, we're ready to get rid of the name parameter. We can do this by adding a name attribute to the Test class, like we did with the failure attribute:

class Test
  attr_accessor :name
  attr_accessor :failure
  def initialize name
    self.name = name
    self.failure = false
  end

  def self.run reporter
    public_instance_methods.grep(/_test$/).each do |name|
      e = self.new(name).run
      reporter.report e
    end
  end
  # ...
end

This new way of calling the Test#run method requires us to change that a little bit:

class Test
  def run
    send name
    false
  rescue => e
    self.failure = e
    self
  end
end

We can now make our Reporter class work with a single argument:

class Reporter
  def report e
    unless e then
      print "."
    else
      puts
      puts "Failure: #{e.class}##{e.name}: #{e.failure.message}"
      puts " #{e.failure.backtrace.first}"
    end
  end
end

We now have a much better Reporter class, and we can now turn our attention to a new problem in Test#run: it can return two completely different classes, false for a successful test and a Test object for a failure. Tests know if they fail, so we can know when they succeed without that false value.

class Test
  # ...
  attr_accessor :failure
  alias failure? failure
  # ...
  
  def run
    send name
  rescue => e
    self.failure = e
  ensure
    return self
  end
end

class Reporter
  def report e
    unless e.failure? then
      print "."
    else
      # ...
    end
  end
end

It would now be more appropriate for the argument to Reporter#report to be named result instead of e.

class Reporter
  def report result
    unless result.failure? then
      print "."
    else
      failure = result.failure
      puts
      puts "Failure: #{result.class}##{result.name}: #{failure.message}"
      puts " #{failure.backtrace.first}"
    end
  end
end

Now, we have one more step to improve reporting. As of right now, errors will be printed with the dots. This can make it difficult to get an overview of how many tests passed or failed. To fix this, we can move failure printing and progress reporting into two different sections. One will be an overview made up of dots and "F"s, and the other a detailed summary, for example:

...F..F..F

Failure: TestClass#test_method1: failure message 1
 test.rb:1:in `test_method1'

Failure: TestClass#test_method2: failure message 2
 test.rb:5:in `test_method2'

... and so on ...

To get this kind of output, we can store failures while running tests and modify the done function to print them at the end of the tests.

class Reporter
  attr_accessor :failures
  def initialize
    self.failures = []
  end

  def report result
    unless result.failure? then
      print "."
    else
      print "F"
      failures << result
    end
  end

  def done
    puts

    failures.each do |result|
      failure = result.failure
      puts
      puts "Failure: #{result.class}##{result.name}: #{failure.message}"
      puts " #{failure.backtrace.first}"
    end
  end
end

One last bit of polishing on the Reporter class: we'll rename the report method to << and the done method to summary.

class Reporter
  # ...
  def << result
    # ...
  end

  def summary
    # ...
  end
end

class Test
  def self.run_all_tests
    # ...
    reporter.summary
  end
 
  def self.run reporter
    public_instance_methods.grep(/_test$/).each do |name|
      reporter << self.new(name).run
    end
  end
end

We're almost done now! We've got one more step. Tests should be able to run in any order, so we want to make them run in a random order every time. This is as simple as adding `.shuffle` to our Test.run function, but we'll make it a little more readable by moving the public_instance_methods.grep statement into a new function:

class Test
  def self.test_names
    public_instance_methods.grep(/_test$/)
  end
  
  def self.run reporter
    test_names.shuffle.each do |name|
      reporter << self.new(name).run
    end
  end
end

And we're done! This may not be the most feature-rich test framework, but it's very simple, small, well written, and gives us a base which is easy to extend and build on. The entire framework is only about 70 lines of code.

Thanks to Ryan Davis for an excellent talk! Also check out the code and slides from the talk.



published by noreply@blogger.com (Josh Lavin) on 2015-10-30 11:30:00 in the "Conference" category

In my last post, I shared about the Training Days from the Perl Dancer 2015 conference, in Vienna, Austria. This post will cover the two days of the conference itself.

While there were several wonderful talks, Gert van der Spoel did a great job of writing recaps of all of them (Day 1, Day 2), so here I'll cover the ones that stood out most to me.

Day One



Dancer Conference, by Alexis Sukrieh (used with permission)

Sawyer X spoke on the State of Dancer. One thing mentioned, which came up again later in the conference, was: Make the effort, move to Dancer 2! Dancer 1 is frozen. There have been some recent changes to Dancer:

  • Middlewares for static files, so these are handled outside of Dancer
  • New Hash::MultiValue parameter keywords (route_parameters, query_parameters, body_parameters; covered in my earlier post)
  • Delayed responses (asynchronous) with the delayed keyword (see the sketch after this list):
    • Runs on the server after the request has finished.
    • Streaming is also asynchronous, feeding the user chunks of data at a time.
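
Here is a minimal sketch of the delayed keyword, loosely based on the Dancer2 documentation; the route and response text are made up, and an asynchronous Plack server such as Twiggy is assumed:

use Dancer2;

get '/beer' => sub {
    # The block runs after the request/response cycle has been freed up.
    delayed {
        flush;                   # send the headers right away
        content "Beer is ";      # stream body chunks to the client
        content "asynchronous!";
        done;                    # close the response
    };
};

dance;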

Items coming soon to Dancer may include: Web Sockets (supported in Plack), per-route serialization (currently enabling a serializer such as JSON affects the entire app — later on, Russell released a module for this, which may make it back into the core), Dancer2::XS, and critic/linter policies.

Thomas Klausner shared about OAuth & Microservices. Microservices are a good tool to manage complexity, but you might want to aim for "monolith first", according to Martin Fowler, and only later break up your app into microservices. In the old days, we had "fat" back-ends, which did everything and delivered the results to a browser. Now, we have "fat" front-ends, which take info from a back-end and massage it for display. One advantage of the microservice way of thinking is that mobile devices (or even third parties) can access the same APIs as your front-end website.

OAuth allows a user to login at your site, using their credentials from another site (such as Facebook or Google), so they don't need a password for your site itself. This typically happens via JavaScript and cookies. However, to make your back-end "stateless", you could use JSON Web Tokens (JWT). Thomas showed some examples of all this in action, using the OX Perl module.

One thing I found interesting that Thomas mentioned: Plack middleware is the correct place to implement most of the generic parts of a web app; the framework is the wrong place. I think this mindset goes along with Sawyer's comments about Web App + App in the Training Days.

Mickey Nasriachi shared his development on PONAPI, which implements the JSON API specification in Perl. The JSON API spec is a standard for creating APIs. It essentially absolves you from having to make decisions about how you should structure your API.



Panorama from the south tower of St. Stephen's cathedral, by this author

Gert presented on Social Logins & eCommerce. This built on the earlier OAuth talk by Thomas. Here are some of the pros/cons to social login which Gert presented:

  • Pros - customer:
    • Alleviates "password fatigue"
    • Convenience
    • Brand familiarity (with the social login provider)
  • Pros - eCommerce website:
    • Expected customer retention
    • Expected increase in sales
    • Better target customers
    • "Plug & Play" (if you pay) — some services exist to make it simple to integrate social logins, where you just integrate with them, and then you are effectively integrated with whatever social login providers they support. These include Janrain and LoginRadius
  • Cons - customer:
    • Privacy concerns (sharing their social identity with your site)
    • Security concerns (if their social account is hacked, so are all their accounts where they have used their social login)
    • Confusion (especially on how to leave a site)
    • Usefulness (no address details are provided by the social provider in the standard scope, so the customer still has to enter extra details on your site)
    • Social account hostages (if you've used your social account to login elsewhere, you are reluctant to shut down your social account)
  • Cons - eCommerce website:
    • Legal implications
    • Implementation hurdles
    • Usefulness
    • Provider problem is your problem (e.g., if the social login provider goes down, all your customers who use it to login are unable to login to your site)
    • Brand association (maybe you don't want your site associated with certain social sites)
  • Cons - social provider:
    • ???

Šimun Kodžoman spoke on Dancer + Meteor = mobile app. Meteor is a JavaScript framework for both server-side and client-side. It seems one of the most interesting aspects is you can use Meteor with the Android or iOS SDK to auto-generate a true mobile app, which has many more advantages than a simple HTML "app" created with PhoneGap. Šimun is using Dancer as a back-end for Meteor, because the server-side Meteor aspect is still new and unstable, and is also dependent on MongoDB, which cannot be used for everything.

End Point's own Sam Batschelet shared his work on Space Camp, a new container-based setup for development environments. This pulls together several pieces, including CoreOS, systemd-nspawn, and etcd to provide a futuristic version of DevCamps.

Day Two



Conference goers, by Sam (used with permission)

Andrew Baerg spoke on Taming the 1000-lb Gorilla that is Interchange 5. He shared how they have endeavored to manage their Interchange development in more modern ways, such as using unit tests and DBIC. One item I found especially interesting was the use of DBIx::Class::Fixtures to allow saving bits of information from a database to keep with a test. This is helpful when you have a bug from some database entry which you want to fix and ensure stays fixed, as databases can change over time, and without a "fixture" your test would not be able to run.
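
As a hedged sketch of that idea (the config name and paths below are hypothetical), DBIx::Class::Fixtures can dump a slice of a live database into files that travel with the test:

use DBIx::Class::Fixtures;

# Assumes $schema is a connected DBIx::Class schema; the directories
# and the 'buggy_order.json' config are made-up examples.
my $fixtures = DBIx::Class::Fixtures->new({ config_dir => 't/fixture_configs' });

# Dump the rows described by the config into t/fixtures, so the test
# can later restore exactly the data that triggered the bug.
$fixtures->dump({
    config    => 'buggy_order.json',
    schema    => $schema,
    directory => 't/fixtures',
});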

Russell Jenkins shared HowTo Contribute to Dancer 2. He went over the use of Git, including such helpful commands and tips as:

  • git status --short --branch
  • Write good commit messages: one line summary, less than 50 characters; longer description, wrapped to 72 characters; refer to and/or close issues
  • Work in a branch (you shall not commit to master)
  • "But I committed to master" --> branch and reset
  • git log --oneline --since=2.weeks
  • git add --fixup <SHA1 hash>
  • The use of branches named with "feature/whatever" or "bugfix/whatever" can be helpful (this is Russell's convention)

There are several Dancer 2 issues tagged "beginner suitable", so it is easy for nearly anyone to contribute. The Dancer website is also on GitHub. You can even make simple edits directly in GitHub!

It was great to have the author of Dancer, Alexis Sukrieh, in attendance. He shared his original vision for Dancer, which filled a gap in the Perl ecosystem back in 2009. The goal for Dancer was to create a DSL (Domain-specific language) to provide a very simple way to develop web applications. The DSL provides "keywords" for use in the Dancer app, which are specific to Dancer (basically extra functionality for Perl). One of the core aspects of keeping it simple was to avoid the use of $self (a standby of object-oriented Perl, one of the things that you just "have to do", typically).
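
The result is that a complete Dancer app can be tiny. A minimal sketch of a one-file Dancer2 app:

use Dancer2;

# 'get' and 'dance' are DSL keywords; note there is no $self in sight.
get '/' => sub {
    return 'Hello, world!';
};

dance;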

Alexis mentioned that Dancer 1 is frozen — Dancer 2 full-speed ahead! He also shared some of his learnings along the way:

  • Fill a gap (define clearly the problem, present your solution)
  • Stick to your vision
  • Code is not enough (opensource needs attention; marketing matters)
  • Meet in person (collaboration is hard; online collaboration is very hard)
  • Kill the ego — you are not your code

While at the conference, Alexis even wrote a Dancer2 plugin, Dancer2::Plugin::ProbabilityRoute, which allows you to do A/B Testing in your Dancer app. (Another similar plugin is Dancer2::Plugin::Sixpack.)

Also check out Alexis' recap.

Finally, I was privileged to speak as well, on AngularJS & Dancer for Modern Web Development. Since this post is already pretty long, I'll save the details for another post.

Summary

In summary, the Perl Dancer conference was a great time of learning and building community. If I had to wrap it all up in one insight, it would be: Web App + App — that is, your application should be a compilation of: Plack middleware, Web App (Dancer), and App (Perl classes and methods).



published by noreply@blogger.com (Josh Lavin) on 2015-10-28 11:30:00 in the "Conference" category

I just returned from the Perl Dancer Conference, held in Vienna, Austria. It was a jam-packed schedule of two days of training and two conference days, with five of the nine Dancer core developers in attendance.

[image of Vienna]

Kohlmarkt street, Wien, by this author

If you aren't familiar with Perl Dancer, it is a modern framework for Perl for building web applications. Dancer1 originated as a port of Ruby's Sinatra project, but has officially been replaced with a rewrite called Dancer2, based on Moo, with Dancer1 being frozen and only receiving security fixes. The Interchange 5 e-commerce package is gradually being replaced by Dancer plugins.

Day 1 began with a training on Dancer2 by Sawyer X and Mickey Nasriachi, two Dancer core devs. During the training, the attendees worked on adding functionality to a sample Dancer app. Some of my takeaways from the training:

  • Think of your app as a Dancer Web App plus an App. These should ideally be two separate things, where the Dancer web app provides the URL routes for interaction with your App.
  • The lib directory contains all of your application. The recommendation for large productions is to separate your app into separate namespaces and classes. Some folks use a routes directory just for routing code, with lib reserved for the App itself.
  • It is recommended to add an empty .dancer file to your app's directory, which indicates that this is a Dancer app (other Perl frameworks do similarly).
  • When running your Dancer app in development, you can use plackup -R lib bin/app.psgi which will restart the app automatically whenever something changes in lib.
  • Dancer handles all the standard HTTP verbs, except note that we must use del, not delete, as delete conflicts with the Perl keyword.
  • There are new keywords for retrieving parameters in your routes. Whereas before we only had param or params, it is now recommended to use the following (see the sketch after this list):
    • route_parameters,
    • query_parameters, or
    • body_parameters
    • all of which can be used with ->get('foo') which is always a single scalar, or ->get_all('foo') which is always a list.
    • These allow you to specify which area you want to retrieve parameters from, instead of being unsure which param you are getting, if identical names are used in multiple areas.
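
A short sketch of these keywords in a route handler; the route and parameter names are invented for illustration:

use Dancer2;

# e.g. POST /widgets/42?verbose=1 with one or more "name" fields in the body
post '/widgets/:id' => sub {
    my $id      = route_parameters->get('id');       # from the URL path
    my $verbose = query_parameters->get('verbose');  # from the query string
    my @names   = body_parameters->get_all('name');  # every "name" in the body
    return "Updating widget $id";
};

dance;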

Day 2 was DBIx::Class training, led by Stefan Hornburg and Peter Mottram, with assistance from Peter Rabbitson, the DBIx::Class maintainer.

DBIx::Class (a.k.a. DBIC) is an Object Relational Mapper for Perl. It exists to provide a standard, object-oriented way to deal with SQL queries. I am new to DBIC, and it was a lot to take in, but at least one advantage I could see was helping a project be able to change database back-ends, without having to rewrite code (cue PostgreSQL vs MySQL arguments).

I took copious notes, but it seems that the true learning takes place only as one begins to implement and experiment. Without going into too much detail, some of my notes included:

  • Existing projects can use dbicdump to quickly get a DBIC schema from an existing database, which can be modified afterwards. For a new project, it is recommended to write the schema first.
  • DBIC allows you to place business logic in your application (not your web application), so it is easier to test (once again, the recurring theme of Web App + App).
  • The ResultSet is a representation of a query before it happens. On any ResultSet you can call ->as_query to find the actual SQL that is to be executed (see the sketch after this list)
  • DBIx::Class::Schema::Config provides credential management for DBIC, and allows you to move your DSN/username/password out of your code, which is especially helpful if you use Git or a public GitHub.
  • DBIC is all about relationships (belongs_to, has_many, might_have, and has_one). many_to_many is not a relationship per se but a convenience.
  • DBIx::Class::Candy provides prettier, more modern metadata, but cannot currently be generated by dbicdump.
  • For deployment or migration, two helpful tools are Sqitch and DBIx::Class::DeploymentHandler. Sqitch is better for raw SQL, while DeploymentHandler is for DBIC-managed databases. These provide easy ways to migrate, deploy, upgrade, or downgrade a database.
  • Finally, RapidApp can read a database file or DBIC schema and provide a nice web interface for interacting with a database. As long as you define your columns properly, RapidApp can generate image fields, rich-text editors, date-pickers, etc.
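
For example, here is a hedged sketch of inspecting a ResultSet before it runs; the Order source and its columns are hypothetical, and $schema is assumed to be a connected DBIx::Class schema:

# Build a ResultSet: no SQL has been sent to the database yet.
my $rs = $schema->resultset('Order')->search(
    { status   => 'open' },
    { order_by => 'created_at' },
);

# as_query returns a reference holding the SQL string plus bind values
# that *would* be executed, which is handy for debugging.
my $query = $rs->as_query;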

The training days were truly like drinking from a firehose, with so much good information. I am looking forward to putting this into practice!

Stay tuned for my next blog post on the Conference Days.



published by noreply@blogger.com (Selvakumar Arumugam) on 2015-09-30 16:00:00 in the "Conference" category

DevOpsIndia 2015 was held at The Royal Orchid in Bengaluru on Sep 12-13, 2015. After saying hello to a few familiar faces whom I often see at conferences, I collected some goodies and entered the hall. Everything was set up for the talks. Niranjan Paranjape, one of the organizers, was giving the introduction and overview of the conference.

Justin Arbuckle from Chef gave a wonderful keynote talk about the "Hedgehog Concept" and spoke more about the importance of consistency, scale and velocity in software development.


In addition, he quoted: "A small team with generalists who have a specialization delivers far more than a large team of single-skilled people."

A talk on "DevOps of Big Data infrastructure at scale" was given by Rajat Venkatesh from Qubole. He explained the architecture of Qubole Data Service (QDS), which helps to autoscale Hadoop clusters. In short, scale-up happens based on data from the Hadoop Job Tracker about the number of jobs running and the time needed to complete them. Scale-down is done by decommissioning a node, choosing the server that is closest to completing a full hour, because most cloud service providers charge for an hour regardless of whether the usage is 1 minute or 59 minutes.

Vishal Uderani, a DevOps guy from WebEngage, presented "Automating AWS infrastructure and code deployment using Ansible." He shared issues they had faced, such as Ansible task failures due to SSH timeouts when executing a giant task, which they solved by triggering the task, getting out of the system immediately, and monitoring the task separately. Integrating Rundeck with Ansible is an alternative to the enterprise Ansible Tower. He also stated the following reasons for using Ansible:

  • Good learning curve
  • No agents will be running on the client side, which avoids having to monitor the agents at client nodes
  • Great deployment tool

Vipul Sharma from CodeIgnition stated the importance of resilience testing: the application should be tested periodically to make sure it is tough enough to handle any kind of load. He said the Simian Army can be used to create problems in environments, which are then resolved to make the application flawless; it improves applications using tools such as Security Monkey, Chaos Monkey, and Janitor Monkey. "Friday Failure" is also a good method to identify problems and improve the application.

Avi Cavale from Shippable gave an awesome talk on "Modern DevOps with Docker". His talk started with: "What is the first question that arises during an outage? ... What changed?" After fixing the issue, the next question will be "Who made the change?" Both questions are bad for the business. Change is the root cause of all outages, but business requires change. In his own words, DevOps is a culture of embracing change. Along with that, he explained zero-downtime ways to deploy changes using a container.

He said DevOps is a culture; make it FACT (Frictionless, Agile, Continuous, and Transparent).

Rahul Mahale from SecureDB gave a demo of Terraform, a tool for building and orchestrating cloud infrastructure. It embodies "Infrastructure as Code" and also provides an option to generate diagrams and graphs of the present infrastructure.

Shobhit and Sankalp from CodeIgnition shared their experience solving network-based issues. Instead of manually whitelisting users' locations every time to grant access to systems, they created a VPN so that access is tied to users, not locations. They resolved two additional kinds of issues as well. One was solved by adding a router to bind two networks using FIP. The other was that third-party services had to whitelist their containers, but whitelisting every container was impractical; therefore, they created and whitelisted VMs, and the containers accessed the third-party services through those VMs.

Ankur Trivedi from Xebia Labs spoke about the "Open Container Initiative" project. He explained the evolution of containers (Docker in 2013, Rocket in 2014). The various container distributions were compared on their packaging, identity, distribution, and runtime capabilities. The initiative is supported by the community and by companies doing extensive work on containers, with the goal of standardizing them.

Vamsee Kanala, a DevOps consultant, presented a talk on "Docker Networking - A Primer". He spoke about bridge networking, host networking, mapped-container networking, and none (self-managed) with Docker. Communication between containers can happen through:

  • Port mapping
  • Link
  • Docker Compose (programmatically)

In addition, he surveyed tools for clustering containers; each has its own approach to clustering and its own advantages:

  • Kubernetes
  • Mesos
  • Docker Swarm

Aditya Patawari from BrowserStack gave a demo on "Using Kubernetes to Build Fault Tolerant Container Clusters". Kubernetes has a feature called "Replication Controllers," which helps maintain a desired number of running pods at any time. "Kubernetes Services" define a policy for access among pods, exposing sets of pods as microservices.

Arjun Shenoy from LinkedIn introduced a tool called "SIMOORG." Developed at LinkedIn, it induces failures in a cluster to test the stability of the code. It is a component-based open source framework, and a few of its components can be replaced with external ones.

Dharmesh Kakadia, a researcher at Microsoft, gave a wonderful talk titled "Mesos is the Linux". He started with a nice explanation of microservices (relating them to Linux commands: each command is a microservice), which are the simplest independently updatable, runnable, and deployable units. Mesos is a "data center kernel" that takes care of scalability, fault tolerance, load balancing, etc. across the data center.

At the end, I got a chance to do some hands-on work with Docker and played with some of its features. It was a wonderful conference for learning more about configuration management and the container world.



published by noreply@blogger.com (Josh Lavin) on 2015-09-18 11:30:00 in the "Conference" category

In June, I attended the Yet Another Perl Conference (North America), held in Salt Lake City, Utah. I was able to take in a training day on Moose, as well as the full 3-day conference.

The Moose Master Class (slides and exercises here) was taught by Dave Rolsky (a Moose core developer), and was a full day of hands-on training and exercises in the Moose object-oriented system for Perl 5. I've been experimenting a bit this year with the related project Moo (essentially the best two-thirds of Moose, with quicker startup), and most of the concepts carry over, with just slight differences.

Moose and Moo allow the modern Perl developer to quickly write OO Perl code, saving quite a bit of work compared to the older "classic" methods of writing OO Perl. Some of the highlights of the Moose class include:

  • Sub-classing is discouraged; this is better done using Roles
  • Moose eliminates a lot of typing, and more typing often equals more bugs
  • Using namespace::autoclean at the top is a best practice, as it cleans up after Moose
  • Roles are what a class does, not what it is. Roles add functionality (see the sketch below).
  • Use types with MooseX::Types or Type::Tiny (for Moo)
  • Attributes can be objects (see slide 231)
Additional helpful resources for OO Perl and Moo.
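As a quick illustration of the roles-over-subclassing point, here is a minimal Moose sketch; the class, role, and attribute names are my own, not from the class materials (everything is in one file for brevity):

    package HasName;
    use Moose::Role;

    # A role describes something a class *does*: any consumer
    # gains a name attribute and a greet method.
    has name => ( is => 'ro', isa => 'Str', required => 1 );

    sub greet {
        my $self = shift;
        return 'Hello, ' . $self->name;
    }

    package Person;
    use Moose;
    use namespace::autoclean;    # clean up after Moose, per best practice

    with 'HasName';              # compose the role instead of subclassing

    __PACKAGE__->meta->make_immutable;

    package main;
    print Person->new( name => 'Ada' )->greet, "\n";    # Hello, Ada

The Moo version is nearly identical: swap Moose::Role for Moo::Role and Moose for Moo, and use a Type::Tiny type in place of the 'Str' string constraint.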

At the YAPC::NA conference days, I attended all joint sessions, and breakout sessions that piqued my interest. Here are some of the things I noted:

  • The author of Form::Diva gave a lightning talk (approx. 5 minutes) about this module, which allows easier HTML form creation. I was able to chat with the author during a conference mixer, and the next time I need a long HTML form, I will be giving this a try.
  • One lightning talk presenter suggested making comments stand out, by altering your editor's code highlight colors. Comments are often muted, but making them more noticeable helps developers, as comments are often important guides to the code.
  • plenv (which allows one to install multiple versions of Perl) can remember which Perl version you want for a certain directory (plenv local)
  • pinto is useful for managing modules for a project
  • Sawyer did a talk on web scraping in which he demonstrated the use of Web::Query, which provides jQuery-like syntax for finding elements in the page you wish to scrape (see the sketch after this list). There are many tools for web scraping, but this one seems easy to use if you know jQuery.
  • DBIC's "deploy" will create new tables in a database, based on your schema. DBIx::Class::Fixtures can grab certain data into files for tests to use, so you can keep data around to ensure a bug is still fixed.
The presenter of "What is this 'testing' people keep talking about?" did a great job researching a topic which he knew nothing about until after his talk was accepted. If there is ever a good way to learn something, it's teaching it! Slides are here.

The talk on Docker (slides) was interesting. Highlights I noted: use BusyBox, then install Perl on top of it (you can run Plack from this Perl); Gentoo is easy to dockerize, at about half the size of Ubuntu; there are ready-made Perl Dockerfiles; and you can build Perl on a local system, then copy it into the Docker image in the Dockerfile.

I attended some talks on the long-awaited Perl 6, which is apparently to be released by the end of this year. While I'm not sure how practical Perl 6 will be for a while, one of the most interesting topics was that Perl 6 knows how to do math the way humans expect. For example, solve for x: x = 7 / 2. Perl 6 gets this "right" as far as humans are concerned, answering 3.5 rather than a truncated integer. It was interesting that many in attendance did not feel the answer should be "3.5", due to what I suspect is prolonged exposure to how computers do math.

One talk not related to Perl was Scrum for One (video), which discussed how to use the principles of Scrum in one's daily life. Helpful hints included thinking of your tasks in the User Story format ("as a $Person, I would like $Thing, so that $Accomplishment"); leaving murky stories on the backlog, as you must know what "done" looks like; and limiting current tasks to things doable in the next week, which prevents you from worrying about every task on your list. Personally, I've started using Trello boards to implement this, with lists such as: Done, Doing, ToDo, Later.
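Fittingly for a Perl conference, the user story template reads like a Perl interpolation. A trivial sketch, with example values invented:

    use strict;
    use warnings;

    # Fill in the three slots of the User Story template.
    my ( $Person, $Thing, $Accomplishment ) = (
        'conference attendee',
        'a list of must-see talks',
        'I spend breaks talking to people instead of reading the schedule',
    );

    print "As a $Person, I would like $Thing, so that $Accomplishment.\n";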

Finally, while a great technical conference, YAPC's biggest strength is bringing together the Perl community. I found this evident myself, as I had the opportunity to meet another attendee from my city. We were introduced at the conference, not knowing each other previously. When you have two Perl developers in the same city, it is time to resurrect your local Perl Mongers group, which is what we did!



published by noreply@blogger.com (Steph Skardal) on 2015-04-28 12:39:00 in the "Conference" category

This blog post is really for myself. Because I had the unique experience of bringing a baby to a conference, I made an extra effort to talk to other attendees about what sessions shouldn't be missed. Here are the top takeaways from the conference that I recommend (in no particular order):

Right now, the videos are all unedited from the Confreaks live stream of the keynote/main room, and I'll update as the remaining videos become available.

A Message From the Sponsors

My husband: My conferences never have giveaways that cool.
Me: That's because you work in academia.

You can read the full list of sponsors here, but I wanted to comment further:

Hired was the diamond sponsor this year and they ran a ping pong tournament. The winner received $2000, and runners up received $1000, $500, $250. The final match was heavily attended and competitive! Practice up for next year?

Engine Yard also put on a really fun scavenger hunt using Scavify. Since I couldn't attend the multiple parties going on at night, this was a really fun way to participate and socialize.



published by noreply@blogger.com (Steph Skardal) on 2015-04-27 13:02:00 in the "Conference" category

Last week, I brought my 4 month old to RailsConf. In a game-day decision, rather than drag a two year old and husband along on the ~5 hour drive and send the dogs to boarding, we decided it would ultimately be easier on everyone (except maybe me) if I attended the conference with the baby, especially since a good amount of the conference would be live-streamed.


Daily morning photos at the conference.

While I was there, I was asked often how it was bringing a baby to a conference, so I decided to write a blog post. As with all parenting advice, the circumstances are a strong factor in how the experience turned out. RailsConf is a casual three-day multi-track tech conference with many breaks and social events – it's as much about socialization as it is about technical know-how. This is not my first baby and not my first time at RailsConf, so I had some idea of what I might be getting into. Minus a few minor squeaks, baby Skardal was sleeping or sitting happily throughout the conference.

Here's what I [qualitatively] perceived to be the reaction of others attending the conference to baby Skardal:

In list form:

  • Didn't Notice: Probably about 50% didn't notice I had a baby, especially when she was sleeping soundly in the carrier.
  • Stares: Around 50% may have stared. See below for more.
  • Joke: A really small percentage of people made some variation of the joke "Starting her early, eh?"
  • Conversation: An equally small percentage of people started a conversation with me, which often led to more technical talk.

Here are some reasons I imagined behind the staring:

  • That baby is very cute (I'm not biased at all!)
  • Shock
  • Wonder if day care is provided (No, it wasn't. But with a 4 month old who hasn't been in day care yet, I probably wouldn't have used it.)
  • Too hungover to politely not stare

Pros & Cons

And here is how I felt after the conference, regarding pros and cons on bringing the baby:

Pros
  • A baby is a good conversation starter, which is beneficial in a generally introverted crowd.
  • I realized there are helpful & nice people in the world who offered to help plenty of times.
  • The baby was happiest staying with me.
Cons
  • Because children were the focus of many conversations, I missed out on talking shop a bit.
  • It's tiring, but in the same way that all parenting is.
  • I couldn't participate in all of the social/evening activities of the conference.
  • Staring generally makes me feel uncomfortable.

Tips

And finally, some tips:

  • Plan ahead:
    • Review the sessions in advance and pick out ones you want to attend because you may not have time to do that on the fly.
    • Walk (or travel) the route from your hotel to the conference so you know how long it will take and if there will be challenges.
  • Be agile and adapt. Most parents are already probably doing this with a 4 month old.
  • Manage your expectations:
    • Expect the conference with a baby to be tiring & challenging at times.
    • Expect stares.
    • Expect you won't make it to every session you want, so make a point of talking to others to find out their favorite sessions.
  • If not provided, ask conference organizers for access to a nursing or stashing room.
  • Bring baby gear options: carrier, stroller, bouncy seat, etc.
  • Research food delivery options ahead of time.
  • Order foods that are easy to eat with one hand. Again, another skill parents of a 4 month old may have developed.
  • Sit or stand in the back of talks.

While in these circumstances I think we made the right decision, I look forward to attending more conferences sans-children.



published by noreply@blogger.com (Steph Skardal) on 2015-04-23 23:40:00 in the "Conference" category

Today, RailsConf concluded here in Atlanta. The day started with the reveal of this year's Ruby Heroes, followed by a Rails Core panel. Watch the video here.

On Trailblazer

One interesting talk I attended was "See You on The Trail" by Nick Sutterer, sponsored by Engine Yard, in which he introduced Trailblazer. Trailblazer is an abstraction layer on top of Rails that introduces a few additional layers building on the MVC convention. I appreciated several of the arguments he made during the talk:

  • MVC is a simple level of abstraction that allows developers to get up and running efficiently. The problem is that everything goes into those three buckets, and as the application gets more complex, the simplified structure of MVC doesn't answer how to organize logic like authorization and validation.
  • Nick made the argument that DHH is wrong when he says that microservices are the answer to troublesome monolithic apps. Nick's answer is a more structured, organized OO application.
  • Rails devs often say "Rails is simple", but Nick made the argument that Rails is easy (subjective) but not simple (objective). Rails conventions imply that transitioning between developers on a project should be easy, but in practice there is still so much room for interpretation in how and where to organize business logic in a complex Rails application that transitions are less straightforward and not simple.
  • Complex Rails tends to include fat models (as opposed to fat controllers), and views with [not-so-helpful] helpers and excessive rendering logic.
  • Rails doesn't introduce convention on where dispatch, authorization, validation, business logic, and rendering logic should live.
  • Trailblazer, an open source framework, introduces a new abstraction layer to introduce conventions for some of these steps. It includes Cells to encapsulate the OO approach in views, and Operations to deserialize and validate without touching the model.

There was a Trailblazer demo during the talk, but as I mentioned above, the takeaway for me is that, rather than any specific technical implementation at this point, this buzzworthy topic is about good code organization and conventions for increasingly complex applications, encouraging readability and maintainability on the development side.

I went to a handful of other good talks today and will post a summary of my RailsConf experience, with links to popular talks, on this blog.



published by noreply@blogger.com (Steph Skardal) on 2015-04-22 23:52:00 in the "Conference" category

It's day 2 of RailsConf 2015 in Atlanta! I made it through day 1!

The day started with Aaron Patterson's keynote (watch it here). He covered features he's been working on, including auto parallel testing, cached compiled views, integration test performance, and "soup to nuts" performance. Aaron is always good at starting his talk with self-deprecation and humor, followed by sharing his extensive performance work, supported by lots of numbers.

On Hiring

One talk I attended today was "Why We're Bad At Hiring (And How To Fix It)" by @kerrizor of Living Social (slides here, video here). I was originally planning on attending a different talk, but a fellow conference attendee suggested this one. A few gems (not Ruby gems) from this talk were:

  • Imagine your company as a small terrarium. If you are a very small team, hiring one person can drastically affect the environment, while hiring one person will be less influential for larger companies. I liked this analogy.
  • Stay away from monocultures (e.g. the banana monoculture) and avoid hiring employees just like you.
  • Understand how your hiring process may bias you against specific candidates. For example, requiring a GitHub account may filter out applicants who work at a company that can't share code (e.g. where security clearance is required). Another example: requiring open source contributions may filter out candidates with very little free time outside of their current job.
  • The interview process should be well organized and well communicated. Organization and communication demonstrate confidence in the hiring process.
  • Hypothetical scenarios or questions are not a good idea. I've been a believer in this since reading some of Malcolm Gladwell's books, where he discusses how strongly circumstances influence behavior.
  • Actionable examples that are better than hypothetical scenarios include:
    1. ask an applicant to plan an app out (e.g. let's plan out strategy for an app that does x)
    2. ask an applicant to pair program with a senior developer
    3. ask the applicant to give a lightning talk or short presentation to demonstrate communication skills
  • After a rejected interview, think about what specifically might change your mind about the candidate.
  • Also communicate the reapplication process.
  • Improve your process by measuring, with the goal of preventing false negatives. One actionable item here is to keep tabs on people: are there any developers you didn't hire who went on to become very successful, and what did you miss?
  • Read this book.

Interview practices that Kerri doesn't recommend include looking at GPA/SAT/ACT scores, requiring a Pull request to apply, speed interviews, giving puzzle questions, whiteboard coding & fizzbuzz.

While I'm not heavily involved in the hiring process at End Point, I am interested in the topic of growing talent within a company. The notes on identifying your own hiring biases were particularly compelling.

Testing

I also attended a few talks on testing. My favorite little gem from one of these talks was the idea that when writing tests, one should try to balance readability, maintainability, and performance:

Eduardo Gutierrez gave a talk on Capybara where he went through explicit examples of balancing maintainability, readability, and performance in Capybara. Here are the videos & slides for these talks:



published by noreply@blogger.com (Steph Skardal) on 2015-04-22 00:42:00 in the "Conference" category

I'm here in Atlanta for my sixth RailsConf! RailsConf has always been a conference I enjoy attending because it includes a wide spectrum of talks and people. The days are long, but rich with nitty gritty tech details, socializing, and broader topics such as the social aspects of coding. Today's keynote started with DHH discussing the shift towards microservices to support different types of integrated systems, and then transitioned to cover code snippets of what's to come in Rails 5, to be released this year. Watch the keynote here.

Open Source & Being a Hero

One of the talks I was really looking forward to attending was "Don't Be a Hero - Sustainable Open Source Dev" by Lillie Chilen (slides here, video here), because of my involvement in open source (with Piggybak, RailsAdminImport, Annotator, and Spree, another Ruby on Rails ecommerce framework). In the case of RailsAdminImport, I found a need for a plugin to RailsAdmin, developed it for a client, and then released it as open source with no plans to maintain a community. I've watched it get forked by a handful of users who needed the same functionality, and I recently gave another developer commit & RubyGems access, since I have historically been a poor maintainer of the project. I can leverage some of Lillie's advice to help build a group of contributors & maintainers for this project, since it's not something I plan to put a ton of time into.

With Piggybak, while I haven't developed any new features for it in a while, I released it into the open source world with the intention of spending time maintaining a community after being involved in the Spree community. Piggybak was most recently upgraded to Rails 4.2.

Lillie's talk covered actionable items you can do if you find yourself in a hero role in an open source project. She explained that while there are some cool things about being a hero, or a single maintainer on a project, ultimately you are also the single point of failure of the project and your users are in trouble if you get eaten by a dinosaur (or get hit by a bus).

Here are some of these actionable items to recovery from hero status:

  1. Start with the documentation on how to get the app running, how to run tests, and how to contribute. Include comments on your workflow, such as if you like squashed commits or how documentation should look.
  2. Write down everything you do as a project maintainer and try to delegate. You might not realize all the little things you do for the project until you write them down.
  3. Try to respond quickly to requests for code reviews (or pull requests). Lillie referenced a study that mentioned if a potential contributor receives a code review within 48 hours, they are much more likely to come back, but if they don't hear back within 7 days, there is little chance they will continue to be involved.
  4. Recruit collaborators by targeted outreach. There will be a different audience of collaborators depending on whether your open source tool is an app or a library.
  5. Manage your own expectations for contributors. Understand the motivations of contributors and try to figure out ways to encourage specific deliverables.
  6. Have regular retrospectives to analyze what's working and what's not, and encourage introspection.

While Lillie also covered several things that you can do as a contributor, I liked the focus on actionable tasks here for owners of projects. The ultimate goal should be to find other collaborators, grow a team, and figure out what you can do to encourage people to progress in the funnel and transition from user to bug fixer to contributor to maintainer. I can certainly relate to being the single maintainer on an open source project (acting as a silo), with no clear plan as to how to grow the community.

Other Hot Topics

A couple of other hot topics that came up in a few talks were microservices and Docker. I find there are hot topics like this at every RailsConf, so if the trend continues, I'll dig deeper into these topics.

What Did I Miss?

I always like to ask what talks people found memorable throughout the day in case I want to look back at them later. Below are a few from today. I'd like to revisit these later & I'll update to include the slides when I find them.



published by noreply@blogger.com (Steph Skardal) on 2015-04-17 13:37:00 in the "Conference" category

Next week, I'm headed to RailsConf 2015 in Atlanta, my sixth RailsConf, with the whole family in tow:


The gang. Note: Dogs will not be attending the conference.

This will be a new experience, since my husband will be juggling two kids while I attend the daily sessions. It makes sense to go into the conference fairly organized to aid in the kid juggling, right? So I've picked out a few sessions that I'm looking forward to attending. Here they are:

RailsConf is a multi-track conference, with tracks including Distributed Systems, Culture, Growing Talent, Testing, APIs, Front End, Crafting Code, JavaScript, and Data & Analytics. There are also Beginner and Lab tracks, which might be suitable to those looking for a learning & training oriented experience. As you might be able to tell, the sessions I'm interested in cover a mix of performance, open source, and front-end dev. As I've become a more experienced Rails developer, RailsConf has been more about seeing what's going on in the Rails community and what the future holds, and less about the technical nitty-gritty or training sessions.

Stay tuned for a handful of blog posts from the conference!



published by noreply@blogger.com (David Christensen) on 2015-04-06 22:10:00 in the "Conference" category
I recently got back from PGConf 2015 NYC. It was an invigorating, fun experience, both attending and speaking at the conference.

What follows is a brief summary of some of the talks I saw, as well as some insights/thoughts:

On Thursday:

"Managing PostgreSQL with Puppet" by Chris Everest.  This talk covered experiences by CoverMyMeds.com staff in deploying PostgreSQL instances and integrating with custom Puppet recipes.

"A TARDIS for your ORM - application level timetravel in PostgreSQL" by Magnus Hagander. Demonstrated how to construct a mirror schema of an existing database and manage (via triggers) a view of how data existed at some specific point in time.  This system utilized range types with exclusion constraints, views, and session variables to generate a similar-structured schema to be consumed by an existing ORM application.

"Building a 'Database of Things' with Foreign Data Wrappers" by Rick Otten.  This was a live demonstration of building a custom foreign data wrapper to control such attributes as hue, brightness, and on/off state of Philips Hue bulbs.  Very interesting live demo, nice audience response to the control systems.  Used a python framework to stub out the interface with the foreign data wrapper and integrate fully.

"Advanced use of pg_stat_statements: Filtering, Regression Testing & More" by Lukas Fittl.  Covered how to use the pg_stat_statements extension to normalize queries and locate common performance statistics for the same query.  This talk also covered the pg_query tool/library, a Ruby tool to parse/analyze queries offline and generate a JSON object representing the query.  The talk also covered the example of using a test database and the pg_stat_statements views/data to perform query analysis to theorize about planning of specific queries without particular database indexes, etc.

On Friday:

"Webscale's dead! Long live Postgres!" by Joshua Drake.  This talk covered improvements that PostgreSQL has made over the years, specific technologies that they have incorporated such as JSON, and was a general cheerleading effort about just how awesome PostgreSQL is.  (Which of course we all knew already.)  The highlight of the talk for me was when JD handed out "prizes" at the end for knowing various factoids; I ended up winning a bottle of Macallan 15 for knowing the name of the recent departing member of One Direction.  (Hey, I have daughters, back off!)

"The Elephants In The Room: Limitations of the PostgreSQL Core Technology" by Robert Haas.  This was probably the most popular talk that I attended.  Robert is one of the major developers of the PostgreSQL team, and is heavily knowledgeable in the PostgreSQL internals, so his opinions of the existing weaknesses carry some weight.  This was an interesting look forward at possible future improvements and directions the PostgreSQL project may take.  In particular, Robert looked at the IO approach Postgres currently take and posits a Direct IO idea to give Postgres more direct control over its own IO scheduling, etc.  He also mentioned the on-disk format being somewhat suboptimal, Logical Replication as an area needing improvement, infrastructure needed for Horizontal Scalability and Parallel Query, and integrating Connection Pooling into the core Postgres product.

"PostgreSQL Performance Presentation (9.5devel edition)" by Simon Riggs.  This talked about some of the improvements in the 9.5 HEAD; in particular looking at the BRIN index type, an improvement in some cases over the standard btree index method.  Additional metrics were shown and tested as well, which demonstrated Postgres 9.5's additional performance improvements over the current version.

"Choosing a Logical Replication System" by David Christensen.  As the presenter of this talk, I was also naturally required to attend as well.  This talk covered some of the existing logical replication systems including Slony and Bucardo, and broke down situations where each has strengths.

"The future of PostgreSQL Multi-Master Replication" by Andres Freund.  This talk primarily covered the upcoming BDR system, as well as the specific infrastructure changes in PostgreSQL needed to support these features, such as logical log streaming.  It also looked at the performance characteristics of this system.  The talk also wins for the most quote-able line of the conference:  "BDR is spooning Postgres, not forking", referring to the BDR project's commitment to maintaining the code in conjunction with core Postgres and gradually incorporating this into core.

As part of the closing ceremony, there were lightning talks as well; quick-paced talks (maximum of 5 minutes) which covered a variety of interesting, fun and sometimes silly topics.  In particular some memorable ones were one about using Postgres/PostGIS to extract data about various ice cream-related check-ins on Foursquare, as well as one which proposed a generic (albeit impractical) way to search across all text fields in a database of unknown schema to find instances of key data.

As always, it was good to participate in the PostgreSQL community, and I look forward to seeing participants again at future conferences.

