Saturday, January 30, 2016

How to read performance tests report and use it to create reliable SLA (by example)

The analysis presented in this post is mostly suited to web pages and services (especially web services), but it can also give a hint on how to approach performance report analysis for other application types.

If you have configured your performance tests well, you will end up with a rich report containing many useful graphs and numerical data. This post explains how to work with such a report. It also shows what data a report should contain for good application analysis, so you can use it to decide what extra widgets to add to your own reports.

In the last post I described how to create decent performance tests resulting in a pretty report. Let's use this Loadosophia report as an example of a good performance test report and describe the parts I find most useful in everyday work.

(In the screens below I am using the old Loadosophia report layout, as it has all of the graphs on one tab, which makes the legend below easier to follow. All graphs described below are present in the new layout and can be found easily.)


Although the whole report is filled with interesting and important statistical data we should analyse, there are five things that should be analysed first (numbered on the picture above).

Summary and why this is not enough for creating a reliable SLA

Test summary information (1) is the data you normally end up with after running tests with typical test tools, and it should only be the start of a well-configured report. This information is really only useful for comparing results to previous runs. It gives some overview of the system, but it doesn't explain why the values are what they are - so it is not sufficient for creating an SLA. Why? Such a document should be a contract about what conditions we guarantee to the client, and with what probability.
For example, an SLA should state that we ensure that 99% of requests will finish in at most 100 ms. Saying that the average response time will be 100 ms says nothing, because it can mean that 50% of requests take 190 ms and 50% take 10 ms. If the client then sets a timeout of 110 ms, he will get problems in 50% of cases - which is in most cases unacceptable and against the SLA we've agreed to.
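The difference can be sketched with a quick calculation (the numbers are made up for illustration, and the nearest-rank percentile method is just one reasonable choice):

```java
import java.util.Arrays;

// Illustrates why an average is not enough for an SLA: two very
// different latency profiles can share the same mean.
public class SlaSketch {

    static double mean(long[] timesMs) {
        return Arrays.stream(timesMs).average().orElse(0);
    }

    // Nearest-rank percentile: the value below which `pct` percent of samples fall.
    static long percentile(long[] timesMs, double pct) {
        long[] sorted = timesMs.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(pct / 100.0 * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }

    public static void main(String[] args) {
        // Half of the requests take 10 ms, half take 190 ms.
        long[] bimodal = new long[100];
        for (int i = 0; i < 100; i++) {
            bimodal[i] = (i % 2 == 0) ? 10 : 190;
        }
        System.out.println("mean = " + mean(bimodal));           // 100.0 - looks fine
        System.out.println("p99  = " + percentile(bimodal, 99)); // 190 - the SLA-relevant number
    }
}
```

The mean alone would suggest "100 ms", while the honest SLA statement here is "99% of requests finish in at most 190 ms".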

Response time diagnosis

To create a meaningful SLA, or to check application performance with respect to response time, we need to see how the values behaved over the duration of the test and what their distribution is.

The first big input for creating an SLA is the Overall Response time distribution & Quantiles section (2). This graph can literally fill in the "X% will end in Y ms" section of such a document.
For example, based on our test, Google could say that 90% of requests will finish in 400 ms. Or that 87% of requests will finish in 300 ms. Or even that 92% of requests will finish in 500 ms, but 50% will finish in less than 250 ms.
How to phrase it in the document itself depends on the client profile. This data lets you adjust the SLA to the client's needs, but it also gives you important information about your application's performance.
In this case one could ask why 7.5% of requests are so far from the general distribution. Most requests here finish in 200 - 400 ms, but the other group suddenly takes 500 - 600 ms. Such a big deviation can mean application, hardware or even network problems. It's not rare that diagnosing such a fact improves general performance, not only the performance of those 7.5% of requests.
I'm not saying that you must fight for the performance of those 7.5% of requests - maybe even 600 ms is absolutely acceptable for your application? However, you should always be able to answer why those 7.5% deviate in response time.
Additional input for response time analysis comes from the Response Times Distribution Over Time graph (3). It can additionally tell you when (in test time) the response time deviations happened. In our example we can see that the long response times appeared at the start of the test and then disappeared. This can be an important tip that whatever caused those response times went away over time, or only appears from time to time.

Response codes for the rescue

The Response codes over time graph (4) is a simple but powerful tool for diagnosing web applications. First, it can tell you whether your application is working correctly (200 response code). If error pages appear, it will tell you when. It should also be compared against transactions per second (TPS), which can be really helpful.
For example, we can imagine that errors (say, code 500) appear when the test exceeds 15 TPS and overloads our system (for example, backend timeouts).
In other cases our application can experience occasional error codes. This can represent temporary backend problems or network instability. It can sometimes be absolutely acceptable, but it can also be a prediction of future troubles. Therefore, again, it is really important to know why those occasional errors appear, to predict whether they will grow and at what scale.
If we predict that some percentage of response codes will be errors (for example, when some cyclic backend operation makes our services unavailable for a couple of hours), we need to write that into our SLA as well.
Measuring response codes is extremely important in web application performance tests and is unfortunately often ignored. I've seen more than once a situation where developers measured the performance of an application and were extremely happy about an unexpected performance gain. They didn't notice that it was caused by a big percentage of errors due to a backend malfunction. The simple report they were using (average TPS and response time) gave no indication of response code problems.
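That last pitfall can be guarded against with a few lines of post-processing over raw samples. A hypothetical sketch (the sample fields here are assumptions for illustration, not JMeter's output format):

```java
import java.util.Map;
import java.util.TreeMap;

// Buckets raw samples per second and reports the error rate next to the
// throughput, so a "performance gain" caused by fast 500s becomes visible.
public class ErrorRateSketch {

    record Sample(long timestampSec, int statusCode) {}

    // bucket second -> {request count, error count}
    static Map<Long, double[]> perSecond(Sample[] samples) {
        Map<Long, double[]> buckets = new TreeMap<>();
        for (Sample s : samples) {
            double[] b = buckets.computeIfAbsent(s.timestampSec(), k -> new double[2]);
            b[0]++;
            if (s.statusCode() >= 400) b[1]++;
        }
        return buckets;
    }

    public static void main(String[] args) {
        Sample[] samples = {
            new Sample(0, 200), new Sample(0, 200),
            new Sample(1, 200), new Sample(1, 500), new Sample(1, 500), new Sample(1, 500),
        };
        for (var e : perSecond(samples).entrySet()) {
            double tps = e.getValue()[0];
            double errPct = 100.0 * e.getValue()[1] / tps;
            System.out.printf("t=%ds TPS=%.0f errors=%.0f%%%n", e.getKey(), tps, errPct);
        }
        // Second 1 looks "faster" (4 TPS vs 2 TPS), but 75% of it is errors.
    }
}
```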

The story of TPS

Transactions per second graph (5) is the cherry on top of those graphs. 

In most cases an SLA covers not only response times, but also states how many concurrent requests our application is able to handle at any given time.
For such an SLA we need to pick a safe number based on this graph. In this example 8-10 TPS would probably be a safe pick (keeping in mind that the drop at the end is just caused by threads finishing up testing).
Remember not to over-advertise your service. If you guarantee 9 TPS and the customer gets 15, he will be positively surprised and may hold you in high regard for quality of service. On the other hand, if you promise 15 TPS and deliver 9, you will probably deal with lots of bug reports and customer dissatisfaction.

In our example, Response Count By Type (5) additionally gives you the number of virtual users at any given time. You can use that and set a big ramp-up time in your test: for example, you can set the test to run 64 threads with a ramp-up time of 640 seconds - every 10 seconds a new virtual user will be added, showing you how each next concurrent user affects TPS.

This graph should also be used to determine TPS stability. If the VU number is stable, is TPS stable as well? It's obvious that TPS will jump around within some range (like +/- 5%). But if we noticed more drastic jumps - like working at a pace of 100 TPS but suddenly dropping to 5 TPS - we would know we are dealing with something serious (network issues, software problems, some process killing the backend from time to time).
Of course, no matter how wide the range of TPS is, you need to ask yourself yet again the simple question: why?
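Flagging such suspicious seconds automatically makes that question easier to ask. A minimal sketch, assuming we already have a per-second TPS series extracted from the report (the 50% drop threshold is an arbitrary choice):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Flags seconds where throughput falls drastically below the typical level,
// e.g. a service running at ~100 TPS suddenly dropping to 5 TPS.
public class TpsStabilitySketch {

    static List<Integer> suspiciousSeconds(double[] tpsPerSecond, double dropFactor) {
        double[] sorted = tpsPerSecond.clone();
        Arrays.sort(sorted);
        double median = sorted[sorted.length / 2];
        List<Integer> flagged = new ArrayList<>();
        for (int i = 0; i < tpsPerSecond.length; i++) {
            // Anything below dropFactor * median TPS is worth a "why?".
            if (tpsPerSecond[i] < median * dropFactor) flagged.add(i);
        }
        return flagged;
    }

    public static void main(String[] args) {
        double[] tps = {98, 101, 99, 5, 102, 97, 100, 4, 99};
        System.out.println(suspiciousSeconds(tps, 0.5)); // [3, 7]
    }
}
```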

The real treasure

The real fun begins when you create such reports systematically (for example, every week at the same hour). Such a test should also be run after every big (or not so big) software or hardware change. Having such historical data opens the door to brand new possibilities - observing how changes in the application, infrastructure, data stored in the backend, and even user load affect your application's performance.

The important thing here is to never be afraid to ask questions. There should never be any voodoo in good report interpretation. Every team member must know why those numbers appear and, if there are errors, realize whether and when they will become a problem.

This report gives you great tools to detect abnormalities of your application. It also gives you realistic data for creating SLA for your clients. 

This short article covers only the tip of the iceberg of possibilities that good reports after performance tests bring. I hope it will be a good start for you and an inspiration to dig further.

Wednesday, February 12, 2014

Performance Tests made easy


Why performance tests?

When you are working on a project with a web interface (like a web service, webpage or hosting server), you will eventually come to a point where you need to start doing some performance tests. There are several reasons for such a necessity:
  • to know how much traffic you can serve without problems
  • to create realistic Service Level Agreement 
  • to check how well you scale while adding new servers
  • to make a baseline for performance improvements
Knowing how much traffic your product can handle is crucial for deciding whether you need to invest time in new functionality to get more users, or in scalability or performance changes to handle the users you already have.
It is always important to have performance tests in place before you attempt any performance-related changes. Without a baseline from before the changes, you will have no idea whether your work changed anything. It is not rare for developers to spend weeks on something that in the end turns out to be a micro-optimization (or even makes the product slower), because they didn't take something into consideration. To avoid that scenario you need to check whether your optimization idea works. To do that, you need the baseline and performance tests to run again for comparison.
On the other hand, those tests cannot take forever to create. One needs a way to create them quickly, so they don't delay product development. If it took days, some teams might sacrifice such tests (accepting the risks) and ignore them until it's too late. On the other hand, if it took up to an hour, every "mentally stable" developer team would do it and stay on the safe side.

Creating performance tests for web interface.

While creating performance tests, you need to remember to assure some crucial properties. They must:
  • be easy to rerun - you should be able to run them one after another with no problem, at any time of day or night. This implies test automation. It is always good to ask yourself whether you could add your test script to cron on some server and only look at the results after the test run.
  • be configurable - things such as host address, number of threads, number of retries etc. must be read from a config file, so you can easily change them and reuse the tests on different environments.
  • have results that are easy to analyse - dumping results to a file, or calculating only the simplest statistics, is rarely enough. You need to model the test output in such a way that it explains not only what but also why. For example, if the mean request duration is around 100 ms, does it mean that all requests take around 90 - 110 ms, or are most requests around 20 ms with some taking even over 10 seconds and skewing the statistic?
This tutorial will show how to quickly create configurable, automated tests and how to visualize their results in a way that is easy to analyse. It will take less than half an hour!

Creating a test

Let's test Google! Our test will show us the performance of the http://www.google.pl/search?q={query} web interface. As we want our test to be easily configurable, we will look at it as http://{host}/search?q={query}.

First, we need to get the newest JMeter. Then we need to download the jmeter-plugins zip (the standard set is OK) and copy the contents of its lib/ext to apache-jmeter-X.Y\lib\ext. Jmeter-plugins is a great set of JMeter extensions.

Having JMeter configured with jmeter-plugins, we run it in graphical (default) mode with the script you can find in the apache-jmeter-X.Y\bin folder. First we will create a Thread Group for our tests:





We are using the ${__P(name)} function to read values from the property file (it also accepts a default value as a second argument, e.g. ${__P(name,default)}), where:
  • ${__P(test.thread.max)} - the number of threads we want to use
  • ${__P(test.thread.rampUp)} - how much time to take spawning the threads
  • ${__P(test.baseCount)} - the number of requests each thread will perform
Then we will add a sampler for our threads - an HTTP Request:


where we use two additional configurable properties:
  • ${__P(google.host)} - to specify tested host
  • ${__P(google.query)} - to specify test query
Let's save our test plan as google.jmx. Then let's create google.properties:
test.thread.max=4
test.thread.rampUp=1
test.baseCount=10
google.host=google.pl
google.query=loadosophia
Now we can run our test! Let's do it from the console (after all, that is how cron would run it each night):
java -jar ApacheJMeter.jar -n -t /path/to/google.jmx -q /path/to/google.properties
Created the tree successfully using google.jmx
Starting the test @ Fri Oct 04 11:54:10 CEST 2013 (1380880450276)
Waiting for possible shutdown message on port 4445
Tidying up ...    @ Fri Oct 04 11:54:20 CEST 2013 (1380880460120)
... end of run
The tests have run! Let's see the results! If you have any problems you can consult the files in the repo.

Analysing the results with loadosophia.org

Loadosophia is a great site where you can upload JMeter results and receive a rich graphical report that helps to analyse them. It has a Pay What You Want policy, so you may test it and pay if you feel like it. All operations there are over https and signed in with your Google account, so we may say it's decently safe to use.
We will cover Loadosophia in detail soon, but for now you only need to get your Upload Token and create a new project in your workspace named GoogleQueryTest.
Now let's open google.jmx in the JMeter graphical interface and add the Loadosophia.org Uploader:


We change the project name to the one we've just created, GoogleQueryTest, and paste in the token from Your Upload Token. I also like to specify the folder to save results in (relative to where the tests are run from) and to name the test after the time it was run, with ${__time(yyyy-MM-dd-HH:mm:ss)}. It helps when comparing many tests over time.
Let's save the file and open the console. Let's create the results directory and run the tests:
$ mkdir results
$ java -jar ApacheJMeter.jar -n -t /path/to/google.jmx -q /path/to/google.properties
Created the tree successfully using google.jmx
Starting the test @ Fri Oct 04 14:25:58 CEST 2013 (1380889558228)
Waiting for possible shutdown message on port 4445
Tidying up ...    @ Fri Oct 04 14:26:22 CEST 2013 (1380889582299)
... end of run
Now we can see our results (I've made the results public for you to see; normally you need to be a member of the project to see its results).


What's next

This tutorial gives you the basic knowledge to create performance tests in a matter of minutes. The next post will cover tips on how to read a Loadosophia report and how to use it to make your application better.


Graphic courtesy of lasvegassportsperformance.com

Sunday, June 2, 2013

Short jhat tutorial: diagnosing OutOfMemoryError by example

Last time we learned what the causes of OutOfMemoryError can be and what tool-chain we can use to diagnose them. Today we will learn by example how to use the most important one: jhat.
I've prepared a sample project for this exercise, which can be cloned from GitHub. The code is really simple and its problem is obvious, but this simplicity will make it easy to get to know jhat.
First, we need to run our program. We will use a small heap size, for two reasons:
  1. The program will throw the exception faster
  2. jhat will start more quickly, as the heap dump will be smaller
$ git clone https://github.com/petermodzelewski/snippets.git
$ cd snippets/OutOfMemoryErrorGenerator/
$ mvn package
$ java -Xmx128m -Xms128m -jar target/OutOfMemoryErrorGenerator-1.0.jar
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
        at pl.keyer.oome.generator.Sleepyhead(Sleepyhead.java:6)
        at pl.keyer.oome.generator.App.main(App.java:11)

We can notice that the program is still running. We will need another console to run jhat. We use the following commands:
$ jps -l
752 target/OutOfMemoryErrorGenerator-1.0.jar
4480 sun.tools.jps.Jps
$ jmap -dump:file=dump.map 752
$ jhat -port 7401 dump.map
Reading from dump.map...
Dump file created Sat Jun 01 23:25:55 CEST 2013
Snapshot read, resolving...
Resolving 561438 objects...
Chasing references, expect 112 dots................................................................................................................
Eliminating duplicate references................................................................................................................
Snapshot resolved.
Started HTTP server on port 7401
Server is ready.

Important notes about that process:
  • All commands must be executed by the same user: the java process owner
  • The "expect X dots" message is not a joke. While processing bigger heap dumps, one can count the dots in an editor to track progress, as it can take quite a while to process such a file.
  • When processing bigger dumps one must also watch the heap size of jhat itself. This depends on the case, but to be safe (given enough resources) jhat should have 2-4 times more heap than the process it diagnoses. If the memory for jhat is too small, it will simply crash after exhausting it, and the whole process will need to be repeated with a larger amount of memory. For example, to provide jhat with 4 gigs the command would be:
    $ jhat -port 7401 -J-mx4G dump.map
  • The diagnosed process may be terminated after dumping the heap with jmap.
  • Obviously jhat can be run on any machine where the dump is present. On many occasions developers choose to zip the dump and move the debugging to a machine more accessible to them and with enough RAM.
After executing the commands we can visit http://localhost:7401/

When first facing jhat you will quickly realize that this tool comes from times when such tools were designed without consideration for prettiness or usability. This tutorial will show how to navigate it in most cases - all its features are cool and necessary, but the everyday programmer will use only a subset of them to quickly diagnose where the OOME came from.
jhat main page can be divided into sections:

  1. A list of all classes in your program (excluding platform classes - that is, everything that is not from the standard library). This list is normally really long and in most cases not necessary. Normally you will scroll down to the "Other Queries" section right away.
  2. More options for listing classes
  3. The bread and butter of memory debugging; we will use these in a moment
  4. More tools for debugging, but not as helpful as section 3.
    • The heap histogram is sometimes useful to compare quantity vs size of objects
    • When you become a jhat ninja, you can sometimes use OQL to diagnose the application. It is a SQL-like language for searching the heap and calculating statistics over it.
First, let's see the heap histogram.


This view illustrates the very important fact that jhat does not compute "deep" object size. That's why at the top of our process's memory consumption histogram we see the class [B, which is really an array of bytes: byte[]. An array of bytes is often at the top of such a histogram and, what's more, in most cases it doesn't mean there is a problem. For example, if your program processes lots of strings, it will naturally have lots of backing arrays, as each string object holds a reference to one. A similar situation arises when we follow "Show instance counts for all classes (including platform)".


That view is very similar to the histogram sorted by quantity. Normally during an OOME we will look for "our" classes. We need to exclude platform classes to easily detect abnormalities in our classes' quantity (or size). A good start is to follow "Show instance counts for all classes (excluding platform)".

Things to look for: unnaturally big numbers of objects of some class. In our example, the number of workers. Our example illustrates the common problem where the producer creates tasks faster than the consumer handles them, and the producer-consumer queue does not block after a limit is reached.
Unfortunately, in most cases it is not that easy. Diagnosing that objects of some class are eating too much memory is one thing. Diagnosing why and where they are allocated is another. To do so, we need to track the objects' references back to something "active" - for example a thread, a controller, etc. Given that "active" object, we can then analyze the algorithm and find out why so many objects are created.
To illustrate such a process, let's track the references to class [B.



jhat enables going through references for as long as one needs. Unfortunately, when you click on a class "page" it will display all of its instances. To dodge that time-consuming view, you can copy the link of the class page (for example from the histogram view) and construct the reference-by-type view link yourself. For example, if the class [B page has the following url:
http://localhost:7401/class/0x3881d790
then the reference-by-type summary will have the url:
http://localhost:7401/refsByType/0x3881d790
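The transformation is purely mechanical, so (as a trivial sketch, using the example object id from above) it can be scripted:

```java
// Turns a jhat class page url into the reference-by-type url
// by swapping the path segment.
public class JhatUrl {
    static String refsByType(String classUrl) {
        return classUrl.replace("/class/", "/refsByType/");
    }

    public static void main(String[] args) {
        System.out.println(refsByType("http://localhost:7401/class/0x3881d790"));
        // http://localhost:7401/refsByType/0x3881d790
    }
}
```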

Those methods are normally all you need to detect the memory trouble in the process. Additionally, if you are working with an application container (like Tomcat) and having trouble seeing which application (or pool) is leaking objects, you should examine the Classloader section of the class page:


That's it. All you need to know about jhat to start your own memory problem debug. Hope it'll help.

One last tip: many developers, after solving their memory problem, take a heap dump while running the new application version and run jhat on it - just in case. They are often terrified that despite their efforts the object count is still huge, yet somehow the OOME does not appear. Of course, they were so focused on fighting the memory leak that they forgot how GC works. Remember: always trigger a GC collection (for example with VisualVM, or by dumping only live objects with jmap's -dump:live option) to clean the old generation of unnecessary objects before taking the dump and analyzing it with jhat.

Monday, May 27, 2013

How to diagnose OutOfMemoryError in your JVM application?



Types of OOME

So here you are, running your brand new Java application. Or maybe the day your application gained real user traffic has finally come. Your Java process has started to show up in top at last, and your server has gained some load. Everything was great and suddenly - bang! - OutOfMemoryError. You restart your application hoping it was some strange one-off thing, and sometimes that helps... for a day or so. So you search the web to find out how to give Java more memory, and you double the JVM heap size. Sometimes that helps... for two days or so.
If you see OutOfMemoryError in your logs, it can mean one of these (percentage of cases based on my real-life experience):
  1. You have memory leak and You need to diagnose it
    (10% of cases)
  2. Your application is not using the memory right
    (89.9% of cases)
  3. Your application just needs more memory
    (0.1% of cases)
Some of you may be surprised that there is such a thing as a memory leak in a Java program. We must understand that the GC surely handles most memory allocation issues, but it is not a telepath. If you are holding references to objects you are no longer using, and they are still accessible from your application's main reference tree, GC will not free those objects' memory.
If your process downloads hundreds of thousands of records from the database into memory and then analyses them - your application is not using memory right. While writing an application you must predict and control the memory usage of the program, so you don't end up with a great number of objects in memory (which cannot fit there).
An example of such mistake can be observed here:

ResultSet rs = stmt.executeQuery(query);

while (rs.next()) {
    executorService.submit(new MyTask(rs.getLong("id")));
}

executorService.shutdown();
// ... waiting for finish

In most cases in situations like this, task execution takes more time than retrieving the next row. While not yet executed, MyTask objects wait in the executorService's in-memory queue. Given a big enough result set, the memory will run out and the error will be thrown.
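One possible fix (a sketch, not the only option) is to give the executor a bounded queue, so the reading loop gets throttled instead of piling tasks up; with CallerRunsPolicy, a full queue makes the submitting thread run the task itself:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedExecutorExample {

    static int processAll(int taskCount) throws InterruptedException {
        AtomicInteger done = new AtomicInteger();
        // At most 4 workers and 100 queued tasks; when the queue is full,
        // CallerRunsPolicy runs the task in the submitting thread, which
        // naturally slows down the producer loop.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(100),
                new ThreadPoolExecutor.CallerRunsPolicy());
        for (int id = 0; id < taskCount; id++) {
            executor.execute(done::incrementAndGet); // stand-in for MyTask's work
        }
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(processAll(10_000)); // all tasks finish, queue stays bounded
    }
}
```

With this setup memory usage is capped by the queue size, no matter how big the result set is.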
Such errors are nasty, because they appear some time after the application launches. At first, when the application has little data, such result sets are small and there is no problem. The application lives, new features are added, things change and suddenly OutOfMemoryError appears, destroying the application. Such a memory usage bug could have been with the application from the beginning, and finding it can cause trouble, as most panicked people will target the new functionality, believing it is responsible for "the new bug".
Sometimes your application just needs more memory. Java is kind of memory-consuming, and when a program runs many threads at the same time, each of them taking some memory, you need to provide more memory for the JVM.

Facts about OOME

  • Any *Error, including OutOfMemoryError, destabilizes the JVM, and it will not perform normally until it is restarted.
  • The place OutOfMemoryError is thrown from is typically a random one. It will be thrown the first time an object cannot be allocated, not necessarily (and rarely) from the class responsible for the memory problem.
  • You can see an OutOfMemoryError approaching by watching GC behaviour - the GC will desperately try to find at least some space to use. Full GC collections will increase in number big time.

How to fix OOME?

When facing OutOfMemoryError, your first task is to diagnose what fills the memory. Your first instinct will be to give Java more memory, but that is not likely to help in the long run, as heap size is rarely the problem. In most cases it only buys you some time.
Fortunately, all the tools necessary to diagnose the problem come with the JVM, and you should already have them in your PATH (if not, you can find them next to the other Java binaries).
Here are the steps you should follow to diagnose the problem. What is also important: you can run this procedure (and normally will) after the OutOfMemoryError has occurred.
  1. You must be logged in as the owner of the process you want to diagnose.
  2. Get the PID of the process you want to diagnose. You can use your OS monitoring tool for that, or just use jps
    $ jps -l
    32513 sun.tools.jps.Jps
    9693  MyApp.jar
    
  3. Next you need to dump the heap to a file, using the PID of our process
    $ jmap -dump:file=dump.map 9693
    Dumping heap to /tmp/dump.map ...
    Heap dump file created
    
  4. Such a dump isn't suited for reading on your own. Fortunately someone came up with jhat: a web-based browser for this dump file.
    $ jhat -port 7401 dump.map
    Reading from dump.map...
    Dump file created Mon Nov 26 10:44:53 CET 2012
    Snapshot read, resolving...
    Resolving 9942216 objects...
    Chasing references, expect 1988 dots................ (dots trimmed)
    Eliminating duplicate references................ (dots trimmed)
    Snapshot resolved.
    Started HTTP server on port 7401
    Server is ready.
    
    Sometimes the default amount of memory given to jhat is too small, so you must provide it with more:
    jhat -port 7401 -J-mx4G dump.map
    
When you open http://yourhost:7401 you will see the jhat web interface. It takes some getting used to. Here is a tutorial that can help you out. Once you know what-is-what in jhat, you must find out:

  1. What objects eat up most of the memory
  2. What threads eat up most of the memory
  3. Sometimes - what class loaders eat up most of the memory (when dealing with application servers, to determine which application created the problem).
If the diagnosed threads consume a predictable amount of memory, then you're lucky: tuning the thread count and heap size will solve your problem (you might also consider getting a new server and scaling the workload horizontally).
In most cases you will need to find the lines of code that produce new instances of the problematic class and check whether you are creating too many objects before using them. Maybe some queue grows too quickly, or some list gets too big? 
If none of this is true, you have a memory leak and you will need to analyze what is holding references to the objects eating too much memory. That is probably the hardest case to debug. 
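A typical shape of such a leak is a collection reachable from a GC root that only ever grows. This is a contrived sketch (the class and method names are my own invention, not from any real codebase) of what you often end up finding:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical example of a classic leak: a static "cache" that only grows.
public class LeakDemo {
    private static final List<byte[]> CACHE = new ArrayList<>();

    // Simulates request handling: each call parks a buffer in the cache
    // and nothing ever evicts it, so the GC can never reclaim the memory.
    static void handleRequest(int sizeKb) {
        CACHE.add(new byte[sizeKb * 1024]);
    }

    static int cacheSize() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest(64);
        }
        // In jhat, byte[] would dominate the instance counts, and following
        // the reference chains from the rootset would lead back to LeakDemo.CACHE.
        System.out.println("buffers held: " + cacheSize());
    }
}
```

In jhat you would see the ever-growing class (here byte[]) at the top of the histogram, and tracing who holds references to those instances points you at the offending field.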

Photos courtesy of wikia: 1, 2.

Saturday, May 18, 2013

On language popularity in first quarter of 2013

Recently I found this great graph showing language popularity based on Stack Overflow and GitHub tags:

On the upper right-hand side we see the most popular languages: the front line. Most of them are really no surprise: Java, PHP, C/C++, C#, Obj-C, Perl or Ruby. Those are the ones that have been in that spotlight for a while now. What's interesting is the strong position of JavaScript (man, that language grows) and Python with its second youth sponsored by indie games.
Not so far away (compared with older reports) is the second wave of computer languages. One could say those are the newcomers still building their communities, but there are some old friends too. The unquestionable leader of the second wave is Scala, almost ready to join the mainstream (I've separated it into a one-dot set on the graph). The other popular new JVM languages are also there: Clojure and Groovy, although that could have been predicted, as those three have been gathering bigger and bigger communities all along. It's about time for them to slowly replace Java in some applications. 
It's interesting that the mentioned old friends, Prolog, Haskell and Lua, are there. Those three are really passing the test of time: always in the shadows of the mainstream languages, but never going down. 
What really got me thinking was how far Rust was behind D. Is it that C-family programmers are not so eager to try out new things?
"It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change."
One thing is certain: new times require new tools, and more and more people realize that every day. Mainstream languages will be with us for a long time, but the faster we can adapt to the "second wave", the better our situation on the job market will be and, more importantly, the more exciting our everyday work will be. The second wave of languages is on its constant way to join the first wave, or even to take its place. 

Friday, May 17, 2013

Atmosphere 2013

Earlier this week I had the great pleasure of participating in the first edition of Atmosphere, an e-commerce industry conference. I must admit I went there full of doubts. Could the first edition of such an anonymous event be a success, especially as it was a paid conference? Fortunately it could! When I arrived I saw the whole place buzzing with people talking about all the cool stuff. 
It all started with a refreshing keynote from Brian McCallister. It was interesting to hear that all startups travel the same evolutionary path. The refreshing part was hearing that "this is OK": that the technical debt we take on at the start of a project is caused by good intentions and important reasons. How to reconcile this general truth with the rules of software craftsmanship is yet to be discovered for me. 
"There are two types of startups. Those successful and a little ashamed of their code and those who're out of business"
Continuing the tale of startups, Paul Hammond spoke about the choices you can make to avoid running out of money during the first months of a project. That was a really great talk, divided into a few simple lessons.
My colleagues from Toruń working on the Allegro Charity Platform (running, for example, wosp.allegro.pl) gave a great talk about the technology and architecture they've used to handle the traffic of the latest WOSP final. 
Speaking of handling Allegro traffic, our Poznań division did a great job sharing some of our experiences in that field, followed by a lecture on both the technologies and the methodologies we use to handle such high traffic with good site responsiveness.
There were plenty more interesting presentations, but those above are my personal favorites. I'm really looking forward to the publication of the videos, so I can watch the ones I missed from other tracks.
Personally, I had the honor of giving a speech about choosing the programming language for a project, covering Java, PHP, Erlang and Perl as the main characters. I would like to thank all the attendees for being there, and I hope you had a good time taking part in this trip through the magical lands of programming languages.
The conference was a blast. On top of great speakers, delicious food and a wild party, all of the attendees left with their own Raspberry Pi as a gadget. I have some plans for how to use mine and I will share them with you if I succeed.
I'm looking forward to next year's edition. Hope to meet you there too!

Photos courtesy of Atmosphere Facebook page.

Thursday, May 16, 2013

Hello World!

It's been a while since I had a blog. Because many people at conferences asked me whether I had one, I've decided to start one. I believe I have some cool stuff to share, and I hope you will find it interesting. Don't hesitate to leave comments to let me know how you like it.