Constructor Vs Setter: A Better Way

This post originally appeared on DZone at https://dzone.com/articles/constructor-vs-getter-a-better-way

One of my jobs as a Zone Leader at DZone is to syndicate content; that is, to read posts from the giant firehose of articles from our MVBs and select the best ones for republishing on DZone (aside: we’re always looking for new MVBs. If you want your blog to be read by thousands of people then click this link). My interest was particularly piqued when I saw “Constructor or Setter” on the Let’s Talk About Java blog. I’m always interested in the ways different people code, in the hope I can learn something new or pass on some knowledge.

Honesty from the start: I hate setters. They’re generally the wrong thing to do and represent a code smell from bad wiring. Wherever possible (which is most of the time) you should pass everything you need into the constructor and generally assign it to a final field. I don’t think that’s going to blow any minds, as it’s general good OO practice. If you’re writing code using TDD you’ll never inject something via a setter.

The example used in the article is an interesting one, though. Your code base is evolving and, as a result, you end up with multiple constructors due to optional client demands: they’d like the option of having notifications on a trigger, they’d like the option of a snapshot, and so on, which results in the code below.

public class SomeHandler {
   public SomeHandler(Repository repository, Trigger trigger) {
       // some code
   }
   public SomeHandler(Repository repository, Trigger trigger, SnapshotTaker snapshotTaker) {
       // some code
   }
   public SomeHandler(Repository repository, Trigger trigger, Notifier notifier) {
       // some code
   }
   public SomeHandler(Repository repository, Trigger trigger, SnapshotTaker snapshotTaker, Notifier notifier) {
       // some code
   }
   public void handle(SomeEvent event) {
       // some code
   }
}

Everyone can agree this code is ugly and confusing; so many overloaded constructors are far from ideal, and they’re the wrong thing to do. From the original article:

“Why do we have so many constructors? Because some dependencies are optional, their presence depends on external conditions… we should place in the constructor only the required dependencies. The constructor is not a place for anything optional.”

So far, so good. The solution proposed is effectively to use setters, although not named “setX”; instead, enable() methods set each optional dependency.

public class SomeHandler {
   public SomeHandler(Repository repository, Trigger trigger) {
       // some code
   }
   public void enable(SnapshotTaker snapshotTaker) {
       // some code
   }
   public void enable(Notifier notifier) {
       // some code
   }
   public void handle(SomeEvent event) {
       // some code
   }
}

There is, however, another way which I strongly advocate, and which I believe is cleaner in design and code and will result in fewer bugs.

The problem here is the premise that these dependencies are optional. In reality they are not; whether they are present or null, a code path will have to be called. In the example given, the code would likely look like this:

public void handle(SomeEvent event) {
    Data data = repository.getData(event);
    if (snapshotTaker != null) {
        snapshotTaker.snapshot(data);
    }
    if (notifier != null) {
        notifier.notify(data);
    }
    trigger.fire(data);
}

Those two if statements dramatically increase the cyclomatic complexity of this method. You would need tests to make sure it functions when:

  • snapshotTaker and notifier are present
  • snapshotTaker and notifier are null
  • snapshotTaker is present and notifier is null
  • snapshotTaker is null and notifier is present

And that’s before you get to testing the original functionality in the class!

In reality, these fields are not optional, as a code path is exercised either way. Plus, null is a terrible thing. Instead, the code should be reduced to a single constructor which takes all four dependencies. If the instantiating class doesn’t need the “optional” functionality then it owns the responsibility of creating and passing in a no-op version.

In the examples, let’s assume notifier and snapshotTaker only have one method each, notify and snapshot respectively. This makes things really easy in Java 8:

new SomeHandler(new DBRepository(), new EmailTrigger(), data -> {}, data -> {});

Alternatively, we can have specific no-op classes for clarity’s sake.

new SomeHandler(new DBRepository(), new EmailTrigger(), new NoOpSnapshotTaker(), new NoOpNotifier());

As a result, there are no if statements in the executing code, making it clearer to read and simpler to test, and less error prone.
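Putting the pieces together, here is a minimal, self-contained sketch of the approach. The interface and class names are illustrative (the Repository and Trigger collaborators are dropped for brevity), and the bodies are placeholders rather than the original code:

```java
// Sketch: every collaborator is required, so handle() has no branches.
// Interface and class names are illustrative, not from the original code base.
interface SnapshotTaker { void snapshot(String data); }
interface Notifier { void notify(String data); }

class SomeHandler {
    private final SnapshotTaker snapshotTaker;
    private final Notifier notifier;

    SomeHandler(SnapshotTaker snapshotTaker, Notifier notifier) {
        this.snapshotTaker = snapshotTaker;
        this.notifier = notifier;
    }

    void handle(String data) {
        snapshotTaker.snapshot(data); // no null check needed
        notifier.notify(data);        // a no-op lambda simply does nothing
    }
}

public class NoOpDemo {
    public static void main(String[] args) {
        // A caller that doesn't want notifications passes a no-op lambda.
        SomeHandler handler = new SomeHandler(
                data -> System.out.println("snapshot of " + data),
                data -> {});
        handler.handle("event-1");
    }
}
```

The caller decides what “doing nothing” means, and the handler never has to ask.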

The case of optional subscribers is particularly interesting, though. Often these may not be known at construction time, in which case it makes sense to have a specific subscribe(Subscriber sub) method. Often, though, it makes sense to allow multiple subscribers, so these can be stored in a List. Again, if there are zero subscribers it’s no big deal, as the List is owned by the class and will never be null.

If we apply this to the case of the notifier in the previous example:

private final List<Notifier> notifiers = new LinkedList<>();

public SomeHandler(Repository repository, Trigger trigger, SnapshotTaker snapshotTaker) {
    // some code
    this.repository = repository;
    this.trigger = trigger;
    this.snapshotTaker = snapshotTaker;
}

public void subscribe(Notifier notifier) {
    notifiers.add(notifier);
}

public void handle(SomeEvent event) {
    Data data = repository.getData(event);
    snapshotTaker.snapshot(data);
    for (Notifier notifier : notifiers) {
        notifier.notify(data);
    }
    trigger.fire(data);
}

Clean code with zero complexity. I hope this offers a compelling alternative in the battle between constructors and setters.

Continuous Deployment With Heroku and Github

This post originally appeared on DZone.

The movement to Continuous Deployment (CD) has been gathering speed and is widely acknowledged as the way to go. Code is checked in, an automated suite is run, and if it passes the code is automatically deployed into production. A story is not “done” until it is in production, providing value to the end user, and CD gives us the smallest mean time to production for our code. To get to this point we have to have a lot of faith in our test suite and the code base, which ensures we will write more robust systems to cope with this way of working.

There still isn’t a huge heap of tooling available to build a continuous deployment pipeline; it tends to be something people have manually crafted using tools such as Puppet, Ansible and Chef. That’s why when I went to put a project up on Heroku for the first time in a while I was pleasantly surprised to see it now supports building your code from GitHub and continuous deployment from that repository.

Let’s first discuss what Heroku brings to the table. It’s a great place to deploy your applications and services in a scalable fashion. You can pretty much drop any application in any language onto Heroku and it’ll spin it up for you, accessible to the world. Your app is scalable from a simple web dashboard too; start out with a single dyno, and increase if you need the capacity. There are lots of awesome add-ons you can throw on too, such as Papertrail for log alerting and HTTPS certificate hosting. The add-ons vary in price, but getting a simple process up and running is totally free.

This tutorial presumes you’ve signed up for an account at heroku.com and you have an existing Java web project you’d like to set up for CD which is already in Github.

Step 0: Github Build

As with any good continuous integration setup, you want to make sure your tests are all run first. Fortunately this is easy to do with Github and TravisCI. You can sign up using your Github account at Travis-CI.org, which will then allow you to easily create automated builds which are triggered on every check-in.

Once you’ve signed in, you will be given a list of all your repositories. Flick the toggle switch for the project that you’re setting up, in my case WizChat.

Travis requires a config file named .travis.yml to be placed in the root of your project. Although there’s a variety of options you can choose to set up, for the purpose of this example I went with the simplest ones: telling Travis the project is Java and to build using Oracle JDK 8.
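For reference, a minimal .travis.yml matching that description might look something like this (a sketch; check the Travis docs for the current options):

```yaml
language: java
jdk:
  - oraclejdk8
```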

Check this in and your first build will be automatically triggered. If everything is set up correctly in your project and your tests are passing, Travis will give you a green build. Either way, it should send you an email to let you know.

Image title

For bonus marks, once you’ve completed your first build you will see a badge indicating the build status. Click it and you’ll be given a variety of ways to integrate the build badge into other sites. Convention dictates placing it at the top of the readme for your project. As my readme is in Markdown format, I simply copied and pasted the Markdown syntax provided.


Setting Up Your Project for Heroku

As mentioned before, it’s possible to throw pretty much anything at Heroku and have it run. All it requires is the creation of a Procfile, which Heroku uses to know what to run and how.

For web projects it only requires a single line. Again, there’s lots of extended config you could dig into if you need finer-grained control, but the following should be sufficient to get your web project going:

web: java -jar target/wizchat.jar

This tells Heroku the command to run and that the project is a web project, so Heroku will tell us which port to use via an environment variable. You need to ensure that your project runs on whatever port Heroku hands it, or else your application won’t run. For example, this is done in SparkJava using the following syntax:

port(parseInt(ofNullable(System.getenv("PORT")).orElse("8080")));
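If you’re not using SparkJava, the same PORT lookup works with the JDK’s built-in HTTP server; here is a rough sketch (the response body is a placeholder, and the server is stopped immediately so the sketch exits, whereas a real app would keep running):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class PortDemo {
    public static void main(String[] args) throws Exception {
        // Heroku injects the port via the PORT environment variable;
        // fall back to 8080 for local development.
        int port = Integer.parseInt(
                System.getenv().getOrDefault("PORT", "8080"));

        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Listening on port " + port);
        server.stop(0); // a real app would block here instead of stopping
    }
}
```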

For our Procfile to work it also relies on us having a runnable Jar. This can easily be achieved using the Maven assembly plugin, or the Maven shade plugin.

<plugins>
    <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <configuration>
            <archive>
                <manifest>
                    <mainClass>com.samatkinson.Main</mainClass>
                </manifest>
            </archive>
            <descriptorRefs>
                <descriptorRef>jar-with-dependencies</descriptorRef>
            </descriptorRefs>
            <finalName>wizchat</finalName>
            <appendAssemblyId>false</appendAssemblyId>
        </configuration>
        <executions>
            <execution>
                <id>make-assembly</id>
                <phase>package</phase>
                <goals>
                    <goal>single</goal>
                </goals>
            </execution>
        </executions>
    </plugin>
</plugins>

This builds a single fat executable jar with all the dependencies included. You just set mainClass and finalName, and Maven does the rest.

With everything checked in, we now head over to Heroku and log in to the dashboard. In the top right is an arrow. Click on it and select “Create new app”.

Create a name for your app and select in which region you’d like to run it.

You can then choose the deployment method: select Github and sign into your Github account when prompted. This should integrate and pull all of your repos in. Search for and select the repository you’re integrating and press “Connect”.

For the final step, select “Wait for CI to pass before deploy” and click Enable Automatic Deploys. You will now have automated deployment of your application to Heroku on every check-in, provided the CI passes! Your application will be available at <appname>.herokuapp.com.

I highly advise setting up the Heroku Toolbelt so that you can tail the logs of your application to make sure it started correctly.

Your Application Probably Doesn’t Need a Database

This post originally appeared on Dzone here.

Ok, I’m being a little facetious here, but it’s certainly my default position. Developers seem to love to integrate databases into their applications without a thought for the requirements, despite the fact they end up moaning a lot about the database (and, if you’re in a big organization, the database team). It just seems to be the default position — “I am building an application so I will need a database and I will need Spring and I will need MQ and I will need <Insert defaults here>”. This is the wrong way of thinking.

People love habits. Once we’ve fallen into them it’s nearly impossible to escape, and countless books have been written to try and help. But as the old saying goes, the first step is admitting you have a problem. You’re addicted to Hibernate and Postgres. Sure everyone else is too, but that doesn’t make it right.

Let me say off the bat that databases aren’t inherently a bad thing. I don’t dislike them in the way I do Spring for example. Databases can be great and wonderful and exceptionally useful.  It’s just that it has become the default position, when in fact there are a lot of applications where a big, separate database isn’t necessarily needed.

So What’s the Alternative?

No Database!

RAM is a wonderful thing. It’s cheap. It’s incredibly fast. The more of your application’s data you have in memory, the faster it’s going to run. I can’t emphasise enough how much easier it is to store data in memory.

There’s no need to have a complex object mapping layer. There’s no need to deal with transactions. You just have data in Objects and it’s so easy to work with.

Obviously this won’t work for every application, but it will work for more than you think. The criteria:

  • Stateless service.
  • You must be able to bootstrap any data from other sources at startup.
  • Assuming you aren’t able to have a big cluster of these apps for failover purposes, you need to be dealing with information small enough that the bootstrap process is quick.

This obviously works well for aggregating services that sit atop other data sources, a surprisingly common workflow which is going to become more common with the increasing popularity of microservices.

Phone a Friend

If you’re dealing with a fairly large amount of data and it requires processing in your application, you can quickly get to a point where start-up times are huge if bootstrapping from external sources. In a resource constrained environment, this can be a big problem.

It may also be impossible to bootstrap the data from downstream systems all the time. In finance, for example, a lot of data is generated for “Start of Day” (SOD) by downstream systems which offer no intraday sources. This means that if the application goes down intraday there would be no way to restart it without some sort of data store.

But still, put down the database to start with. In this system, we have a dump of data at the beginning of the day, and we then receive updates throughout the day. This is still stateless; each instance of the application will be identical as there is no state that is unique to each node.

If the application goes down and you need a source of the updates received since start of day, then you need to phone a friend.

Unless there’s a catastrophic failure, as long as you have a single node up it can provide the missing data to its peers. The mechanism is up to you: HTTP, MQ, file dump, whatever floats your boat. It’s just a way to bootstrap a new node. Everything’s still kept in memory.

There is obviously the risk that all nodes die and you can’t get your state from anywhere else.  If this could be a problem in your system, why not try local files?

Files

Files are awesome. They’re well understood, can be human readable, have a great set of APIs around them and are fast. It is relatively simple to write an application’s state out to file, particularly if you’re using an event-sourced system; you don’t need to edit “records”, you just have a timeline of events which you can replay.
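To make that concrete, here is a tiny sketch of the idea: append events as lines, then rebuild state on restart by replaying them. The event format and “balance” domain are invented for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class EventLogDemo {
    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("events", ".log");

        // Append each event as a line; records are never edited in place.
        Files.write(log, List.of("deposit:100", "withdraw:30", "deposit:5"),
                StandardOpenOption.APPEND);

        // On restart, replay the timeline to rebuild the in-memory state.
        int balance = 0;
        for (String event : Files.readAllLines(log)) {
            String[] parts = event.split(":");
            balance += parts[0].equals("deposit")
                    ? Integer.parseInt(parts[1])
                    : -Integer.parseInt(parts[1]);
        }
        System.out.println("Recovered balance: " + balance);
        Files.delete(log);
    }
}
```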

Files are also a great way to speed up the recovery process; instead of sending a huge dataset over the wire from a peer, you can bootstrap from local files up to the point of failure, then just request updates from your peers for after that point.

This really does work in production. One of the systems I worked on was designed for speed and used an event log file for data recovery. It was super fast and quick to recover, and we never had to interact with the database team which was a huge bonus.

SQLite

Perhaps you’re really unwilling to give up on SQL. Maybe you don’t want to have to write and maintain your own serialization (I’ll admit, it can be a burden if not done well). Don’t rush back to your centralized DB! SQLite is a brilliant, super lightweight SQL implementation which stores your database locally in flat files. It’s super fast, it’s in very active development, and has a whole heap of benefits (read more here). It doesn’t even run as a separate process; just include the dependency in your project and start using it.  You get to have all the semantics of a database, but without dealing with anything remote.

None of These Apply to My System!

I’ll admit that not every system can get away with a local file for storage. Some genuinely do need global access and consistency, in which case I encourage you to go crazy. But, if you’re designing a new system (or you’re struggling with your current one), don’t instantly jump to Postgres or MySQL. You could save yourself a lot of time and pain, as well as speeding up your application, by designing your system to store data in memory or on flat file.

It’s all so quiet

Hello!
The blog hasn’t been silent because I’m lazy or busy; a few months ago I became a Zone Leader for Dzone.com. This has me churning out three articles a week over there. It’s been an interesting experience on many levels; primarily seeing just how angry people on the internet can get when someone expresses an opinion, but also seeing how posts I thought were amazing have had relatively low hits, and posts I thought wouldn’t interest people have gone on to be super successes.

I’ve now got permission to repost my articles here, which I’m going to start doing with my favourite ones. You can also see them all, along with comments and likes, by heading to this link.

Below are three of my most “popular” articles so far, which I’ve enjoyed writing; they’ve had a ton of hits and/or a lot of comments.

 

Exceptions in Java: You’re (Probably) Doing It Wrong

This has gotten really popular since the editor changed the title to this slightly more provocative one. People simply won’t let checked exceptions go! Fortunately, there are also lots of people who do understand and have offered support.

Upgrade Your Code Conventions

A list of some things that I do differently in my code bases compared to most people. It’s generated some good discussion, and it’s also attracted some good trolls. The one with the most opponents is not using @Override, which in fairness I’m not that passionate about. I’m glad that hopefully people have reevaluated their coding.

Disposable Architecture: The Tiny Microservice

Not so many comments but a lot of views.

 

Accessing Amazon’s Product API using Clojure

I recently embarked upon a new project with some friends called Swirl (swrl.co). We decided to write it in Clojure; we’re all Java developers by trade but wanted to give Clojure a go, as a bunch of smart people we know keep saying it’s amazing. I’ve definitely formed some opinions, but that’s for another day (or over a beer).

The purpose of swirl is to store your recommendations. Nothing annoys me more than finishing a book and not being able to remember who recommended it to me in the first place. Now, if I recommend something to someone I do it through swirl, and they can then respond on the site so I know they’ve actually watched/read/listened to whatever I recommended.

The first port of call was Amazon integration. Amazon offers an API (the Product Advertising API) to search the site for products and get the details back (wrapped as an affiliate link). What I quickly learnt was that the API is horrible to work with, for two reasons:

1) No JSON option.  Anger!!

2) Every call must include a signature, which is an HMAC-SHA256 hash of all the other arguments and is time-sensitive.

For a read-only product API the signature seems massively overkill. This wouldn’t have been such a big issue if we’d been using Java, but as I was grappling with Clojure syntax for the first time it meant I got stuck for a long time. There are default implementations for most languages (though not Clojure), so I wanted to make my implementation available for anyone else to use.

The code is available on gist at https://gist.github.com/samberic/27439e09ee336cf3dc61.

The two key methods here are search-book, which uses the API to search through all of Amazon’s catalog and return a series of results, and get-book, which retrieves the details of a specific book by ASIN. The URL creation (i.e. the tricky bit) is done in search-url.

(defn search-url [bookname]
  (createEncryptedUrl
    (assoc (timestamp params)
           :Keywords bookname
           :Operation "ItemSearch"
           :SearchIndex "Books")))

There are a bunch of parameters I’ve hard-coded: the Amazon key and associate tag, the service and version (no need to change these), and ResponseGroup. ResponseGroup lets you choose what you want returned from your search, and you must have at least one.

I merge this standard list with the current timestamp, the book to be searched for, and the operation and search index.  If you want to search for things other than books, you can simply change the search index. Once we have a full map of parameters we can use this to create the signature.

 

The signature is made of an alphabetically sorted (hence why params is a sorted-map) list of all parameters joined using “&” and properly encoded (this was a real gotcha: I had to do a string replace of + with %20, as neither form-encode nor url-encode got the job done properly. Hack!), preceded by “GET\nwebservices.amazon.com\n/onca/xml\n”, which you then put through an “RFC 2104-compliant HMAC with the SHA256 hash” using your Amazon private key.
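The steps above translate fairly directly into Java, if that’s easier to follow than the Clojure. This is a sketch of the signing recipe only: the parameter values and secret key are placeholders, and a real request would need your actual credentials and the remaining required parameters:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Map;
import java.util.TreeMap;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class SignatureDemo {
    public static void main(String[] args) throws Exception {
        // A sorted map gives the alphabetical ordering the signature requires.
        TreeMap<String, String> params = new TreeMap<>();
        params.put("Operation", "ItemSearch");
        params.put("SearchIndex", "Books");
        params.put("Keywords", "clojure");

        // Join "key=value" pairs with "&", URL-encoding values and fixing
        // the "+" vs "%20" gotcha mentioned above.
        StringBuilder canonical = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (canonical.length() > 0) canonical.append('&');
            canonical.append(e.getKey()).append('=')
                     .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8)
                                       .replace("+", "%20"));
        }

        String toSign = "GET\nwebservices.amazon.com\n/onca/xml\n" + canonical;

        // HMAC-SHA256 with the (placeholder) private key, then Base64-encode.
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(
                "dummy-secret-key".getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String signature = Base64.getEncoder()
                .encodeToString(mac.doFinal(toSign.getBytes(StandardCharsets.UTF_8)));
        System.out.println("Signature: " + signature);
    }
}
```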

 

After much bashing of head against desk, the amazing Dan Flower pointed me to Buddy, a Clojure security library we already had in the project anyway, which took care of the hashing in a clear and concise way.

I hope this helps someone out somewhere.

How to “Maker” your tests clear

Tests should be your primary means of documentation in a system. I like to think we’ve all moved past the idea of using comments in code (except for APIs), and most of us know to strive for clean, self-documenting code. But for me, tests are the easiest and most powerful way of documenting a section of code. If I’m looking at code and having a WTF moment, I should be able to flip to the tests and see what the intended behaviour was.

The problem is that, despite good intentions, writing clear and well documented tests is hard. Really hard. Particularly if you’re working with legacy code which has tight coupling, making it hard to split out the bits that you care about.

I recently discovered a test when going through a system which looked not unlike this:

@Test
public void bankTransferWillIncreaseDestinationBankAccount() throws Exception {
    BankAccount bankAccount = new BankAccount(20.0);
    final BankTransfer mock = mockery.mock(ConcreteBankTransfer.class);

    mockery.checking(new Expectations() {{
        allowing(mock).accountFrom(); will(returnValue("accFrom"));
        allowing(mock).accountTo(); will(returnValue("accTo"));
        allowing(mock).name(); will(returnValue("Jon Smith"));
        allowing(mock).transferId(); will(returnValue("143NMd24"));
        allowing(mock).ccy(); will(returnValue(null));
        allowing(mock).overdraftLimit(); will(returnValue(200.0));
        oneOf(mock).amount(); will(returnValue(20.0));
    }});
    bankAccount.apply(mock);
    assertThat(bankAccount.amount(), is(40.0));
}

This is a dramatic reconstruction of the actual code. No programmers were hurt in the creation of this code.

The thing is, there were three tests in the class, all of which had this vomit of “allowing”. This code smells badly. Mocks should be used to mock behaviour. In this case the BankTransfer object is, to all intents and purposes, a POJO. Bad behaviour.

But the bigger issue here is that it’s impossible to see what’s going on. Does the fact ccy is null matter? What about the transfer ID? Does the overdraft have something to do with it?

This test is woefully unclear. Even if we remove the mock abuse and replace it with a real implementation, it doesn’t help.

@Test
public void bankTransferWillIncreaseDestinationBankAccount() throws Exception {
    BankAccount bankAccount = new BankAccount(20.0);
    final BankTransfer transfer = new ConcreteBankTransfer(
            "accFrom", "accTo", null, 20.0, 200.0, "Jon Smith", "143NMd24");

    bankAccount.apply(transfer);
    assertThat(bankAccount.amount(), is(40.0));
}

Sure there’s less code, but I have no idea what each field in the constructor means. If anything it’s less clear now which fields matter.

This is where Makers come in. A Maker allows you to construct an object whilst saying “I don’t care about any values except these specific ones”. Let’s look at the final code before showing the implementation.

private final Maker<BankTransfer> aBankTransfer = a(BankTransferMaker.BankTransfer);

@Test
public void bankTransferWillIncreaseDestinationBankAccount() throws Exception {
    BankAccount bankAccount = new BankAccount(20.0);
    BankTransfer bankTransfer = make(aBankTransfer.but(with(BankTransferMaker.amount, 20.0)));
    bankAccount.apply(bankTransfer);
    assertThat(bankAccount.amount(), is(40.0));
}

For me this is infinitely clearer. The code does exactly what it says: it creates a BankTransfer (and we don’t care what that looks like), but we specify that it must have an amount of 20.0, as this is the value we care about for our test. Very terse, clear, and also reusable. Anywhere that needs a BankTransfer object can reuse this.

To use Maker, you need to import Nat Pryce’s “make-it-easy” (Maven details at http://mvnrepository.com/artifact/com.natpryce/make-it-easy). Then it’s simply a matter of creating your Maker.

There’s a fair amount of boilerplate code, and it can be quite tiresome to build a Maker. As a result you may want to think carefully before starting to use them everywhere.

public class BankTransferMaker {

    public static final Property<BankTransfer, String> accountFrom = newProperty();
    public static final Property<BankTransfer, String> accountTo = newProperty();
    public static final Property<BankTransfer, String> name = newProperty();
    public static final Property<BankTransfer, String> transferId = newProperty();
    public static final Property<BankTransfer, Double> overdraftLimit = newProperty();
    public static final Property<BankTransfer, Double> amount = newProperty();
    public static final Property<BankTransfer, ConcreteBankTransfer.Currency> ccy = newProperty();

    public static final Instantiator<BankTransfer> BankTransfer = new Instantiator<BankTransfer>() {
        @Override
        public BankTransfer instantiate(PropertyLookup<BankTransfer> lookup) {
            return new ConcreteBankTransfer(
                    lookup.valueOf(accountFrom, random(5)),
                    lookup.valueOf(accountTo, random(5)),
                    lookup.valueOf(ccy, new ConcreteBankTransfer.Currency(random(3))),
                    lookup.valueOf(amount, nextDouble(0, 100000)),
                    lookup.valueOf(overdraftLimit, nextDouble(0, 100000)),
                    lookup.valueOf(name, random(10)),
                    lookup.valueOf(transferId, random(10)));
        }
    };
}

For each constructor parameter we need we create a Property value, which we have to type correctly. This is where a lot of the frustration can come from, as you have to manually build up this mapping.

We then create an Instantiator; this is how our object is actually created. You can set the default values to whatever you want; I’ve used apache-commons to plug random values in, because I really don’t care what’s there.

When it comes to creating test objects you can then modify any and all of these with the builder-style pattern seen in my original code base. We can change multiple values too, like so:

make(aBankTransfer.but(
        with(BankTransferMaker.amount, 20.0),
        with(BankTransferMaker.transferId, "Octopus")));

It’s a really nice way to give clear visibility to which values matter in your test.

Reasons I love IntelliJ #1

Recently I’ve been interviewing a lot of candidates at work, most of whom use Eclipse and have very little experience with the keyboard shortcuts that seem to be second nature to IntelliJ users. I guess if you’ve made the effort to go out and look for a better IDE, it’s indicative that you’re keen to be as efficient and effective a developer as possible, which I think is hugely important. If you’re going to spend over 9 hours a day doing something, then learning at least the most basic shortcuts is the least you can do to improve your code output.

 

Inspired by this, I put a video together to showcase the features that excite Eclipse users the most when they see them for the first time. Or, if YouTube’s not your thing, read on for gifs and a transcript.

This is effectively an exercise in loving the shortcut alt-enter. If anywhere in IntelliJ you see something red or grey, hit alt-enter on it and IntelliJ will create or fix it for you.

Create New Variables

variable

One of the really cool parts of IntelliJ is how it writes most of the code for you; as a user you don’t have to do much typing. With this shortcut, Ctrl/Cmd+Alt+V will create a new variable for you from a newed-up object.

Create new class

new class

Write the name of a class you want that doesn’t exist yet, hit alt-enter on it, and IntelliJ will create the new class for you.

Create new methods

method

Yet again, write the code you want to see in IntelliJ and hit alt-enter, and it will create it for you: write your new method name, hit alt-enter, and the method is created.

Adding Parameters

parameter

If you need to pass something into a method in IntelliJ, just put the parameter in, hit alt-enter, and it will add it in. This becomes immensely helpful if you have tens or hundreds of call sites around your code base; write it once and IntelliJ will add in the parameter everywhere and allow you to set a default value.

parameter2

 

Code Templates

code template

IntelliJ has a whole host of code templates that allow you to write code with very little typing! Why write public static void main when you can write psvm and hit tab? iter auto-generates for loops. And you can write your own templates, which is very cool.

Generator

generator

IntelliJ is smart enough to know standard coding patterns. If you pass parameters into a constructor, you probably want to create fields for them; alt-enter on the grey text (which indicates it’s unused) will allow you to create new fields for constructor parameters. Using alt-enter on the new parameters, or using the generator (alt-insert on Windows/ctrl-n on Mac), you can then easily create getters for those fields.

Why I hate Spring

When I started my career I really fell in love with Spring. I went long on it. I threw it into all of my projects. I even managed to chuck a bunch of Spring Integration in there for good measure. I was an XML king. What I was building was a custom RPC layer based on JMS, protobufs and Kaazing, to be used across our department and further around the bank. “It’s so configurable,” I would say. “It’s just a few XML files, it’s really flexible,” I would say. I was pretty pleased with myself.

The thing is, there were some people around me who tended to disagree. They were having issues getting this wired together how they wanted; they didn’t know which Spring XML files they needed where. There were issues with Spring versions and getting the right combinations together (I’d also gone long on modularisation; there were five or six different modules at different version numbers with no obvious way, other than an email from me, to know which to use). I didn’t notice these smells; I just thought it must need more documentation, or that the people using it were being stupid. This is a pattern that repeats itself, too; with one of our most disliked and difficult-to-use internal frameworks, cries for help are often met with “It’s one file and some parameters, it’s not that hard” whilst everyone else wastes days trying to find the magic combination of files and parameters to make something happen.

I’m still in the same organisation, and in my new role I’m a consumer of my old framework. This dogfooding has now led me to hate 2009/10 era Sam for several reasons, but mostly for spring. Spring is evil on a good day, but when it’s included as part of a consumable library or API it becomes next level, like a love child of Hitler and the devil. Don’t let Spring leak out of your APIs.

There are a number of reasons why Spring sucks which I felt the need to document, as nothing shows up on Google as a concise argument against it.

  • Configuration in XML: I’d like to think that as a profession we’d moved beyond XML. It’s incredibly verbose, but that’s just a minor starting gripe. Much more importantly, I don’t want to program in XML. The wiring of all of your classes together is a hugely important part of your application. You’re a java developer, not an XML developer. One of the beauties of java as a language is compile-time safety. In my Spring-less applications I can hit compile and have 100% certainty that everything’s built, plugged in and ready to work. In applications I work on with Spring, you hit run, wait 30-60 seconds whilst it initialises beans, and then it falls over. In the modern world we live in this is insane, particularly when you multiply that up over a number of integration tests where you need to spin the container up and down. There’s also a special place against the wall for the “It means I can change my implementation without recompiling!” argument. No one does this. Ever.
  • Magic: At this point the usual comeback is “you can do it all via annotations now! No more XML!”. Whilst not programming in XML is swell and all, annotations are still magic. Until you run your app you’ve no idea if it’s wired up correctly. Even then you don’t know it’s wired up correctly, only that it’s wired up. I don’t like magic.
  • Importing other Spring files: This is currently the item that causes me the most rage. I’ve discovered there’s a tendency to break Spring files down into smaller Spring files, and then scatter them across modules. I’ve just spent two weeks crawling through jars trying to find the right combination/order/version of Spring files to make something actually run. Spring files in jars are a bad, bad idea. Terrible. Every time you spread dependent Spring files across jars, a child dies.
  • Complexity: When interviewing candidates, the most common answer to “any pitfalls to Spring?” is that it has a steep learning curve. Whether or not that’s true is another discussion for another day, but I wanted to highlight the fact that Spring is now so complex that it has its own framework, Spring Boot. A framework for a framework. We are in Framework Inception, a film about Leonardo DiCaprio trying to find his long-lost java code by going deeper and deeper through layers of XML and annotations before eventually giving up on life.

The thing is, I’m sure it’s possible in theory to use Spring well in an application. I’ve just yet to see it happen, and that is the problem. And for me, all of the “benefits” it offers are perfectly possible without it. When we ask about Spring as part of our interview process, the standard answer is “it means you have clean code, separation of concerns, and it’s really good for testing”. All things I’m a huge fan of (in particular the testing part), but the simple fact is these are not outcomes of using Spring, but outcomes of programming well. Perhaps Spring is a good crutch for introducing new developers to the ideas of dependency injection, mocking and testing, but the simple fact is they’re orthogonal. If you TDD your code you’ll find no getters and setters, only constructor injection which you can mock for tests, and then when you’re putting your application together, just use the often-forgotten construct: the “new” keyword. We often build a class called “ApplicationContext” which is in control of wiring everything together. It’s clean, everything’s testable, I have compile-time safety and my tests run darn quickly.
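A minimal sketch of that hand-rolled wiring class, with illustrative collaborator types standing in for real domain classes; the point is that it’s plain Java, so if it compiles, the object graph is complete:

```java
public class ApplicationContext {
    // All wiring happens here, with the "new" keyword and constructor injection.
    private final SomeHandler handler;

    public ApplicationContext() {
        Repository repository = new InMemoryRepository();
        Trigger trigger = new Trigger();
        this.handler = new SomeHandler(repository, trigger);
    }

    public SomeHandler handler() {
        return handler;
    }

    // Illustrative collaborators; in a real app these are your domain classes.
    interface Repository { }
    static class InMemoryRepository implements Repository { }
    static class Trigger { }

    static class SomeHandler {
        private final Repository repository;
        private final Trigger trigger;

        SomeHandler(Repository repository, Trigger trigger) {
            this.repository = repository;
            this.trigger = trigger;
        }
    }

    public static void main(String[] args) {
        // Compile-time safety: if this builds, everything is plugged in.
        ApplicationContext context = new ApplicationContext();
        System.out.println(context.handler() != null); // prints true
    }
}
```

In tests you construct SomeHandler directly with mocks; the ApplicationContext is only touched by the application entry point.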


My First Hackathon

[photo]

This is a picture of me on hour 30 of 37 with no sleep.  Mild hallucinations, lots of shouting at my laptop and volumes of coke zero.  No, this isn’t a new interrogation technique, but Hong Kong’s first hackathon.

Angelhack is a US company that specialises in hackathons and hackcelerators and runs events globally (and often simultaneously). The concept is simple: bring yourself, maybe a team, an idea and your laptops, maybe some hardware, and in 24 hours go from no code to a fully fledged working app. This hackathon was mobile oriented, with the focus being on new apps (although not necessarily for phones; a smartwatch company had brought a bunch of their Android-based watches for people to hack). You must bring NO CODE to the event; everything must be written fresh, which will (somehow) be checked for the winners. Libraries are allowed, so you could probably do most of your work as a lib beforehand and wire it up at the event, but that’s not really in the spirit.

When I first saw the hackathon online, I signed straight up and then pretty much didn’t think about it again until the night before the event (caveat: I had been drinking when I signed up), when I started to do a bit of googling around it. The only similar event I’d done before was HK Startup Weekend, and I’d been expecting a similar setup. Oh how wrong I was! More so, I was surprised by how little information was available about what the Angelhack experience is actually like, which is why I wanted to write the experience up for future victims and to blunt my rose-tinted glasses next time I want to apply.

First point of order: this is not a social/networking event like Startup Weekend where you create your teams on the day, do a lot of mingling and exchanging name cards, and come out with lots of new contacts and friends.  At Angelhack most people arrive as a pre-arranged team with an idea.  There’ll be a few stragglers (like me), and you’re able to hack solo or form a team on the day (like me), but I’d probably encourage you to go in with people you know and an idea you believe in. I was lucky enough to meet and pair up with someone, come up with an idea, and not end up killing each other.

Registration in the morning was terrible at this event; it was run by a HK startup, EventXtra, whose systems kept crashing, which meant everyone had to go through the process multiple times and it was very slow. This was mitigated somewhat by the Starbucks breakfast inside (my stomach dictates my mood, or at least has a significant influence on it). After an hour of general schmoozing and strategy planning (and stealing the best tables), and two hours (!) of sponsor presentations, we got around to pitches.

Pitches are an opportunity for individuals and teams to pitch their idea and find people to join their groups. It’s fairly poorly organised, and as most people are pre-organised I’m not convinced anyone picked up new team members from this. As mentioned before, go with a team organised beforehand if you can!

Then comes the hacking. It really is just 24 hours of coding the crap out of something, using as much brute force and as many hacks as it takes to get it (to look like it is) working. I’ll go into detail on what we built and the tech behind it in another post. Elevator pitch: it’s a push to-do list so you can nag other people. Use case: fiancée reminding me to take the washing out of the machine after she’s gone to work. It was a hell of a learning experience, from both a technology perspective and as an individual. I’m proud of what we managed to put together in that time frame compared to how much I manage to get done when coding personal projects normally. Most of the teams didn’t code through the night, with a number of people passed out over tables or on the supplied bean bags, and some people even just went home and came back the next day. But for the true experience, stick it out. You won’t regret it after you’ve recovered (which took me about a week).

At the end of the hacking came the presentations. I can safely say that whilst I wanted to do well, my enthusiasm was waning as I was feeling immensely broken. There were something like 24 teams presenting, so there was a lot of waiting around. We got three minutes to present to a panel of four judges. I think this is one of the most disheartening things I’ve ever been through; after 24 hours with no sleep busting my nuts, with mad-dash sprints to get beautiful little features working, three minutes to show it is nowhere near long enough and is quite an affront to how hard some of the teams worked. It’s a very different mentality from a group like Startup Weekend, where everyone is encouraged to present no matter what state they got to. Hopefully it’s something they can look to improve in future iterations.

The judges narrowed the running down to nine teams (one of which went MIA), who then presented to the whole room. By this point I was very tired and very frustrated from all the waiting around, and desperately in need of some proper food and sleep. Delirium had definitely set in, with the stupidest and simplest of jokes setting me and my programming partner off laughing uncontrollably for minutes at a time. Then, more waiting! The judges made their decisions and announced them, prizes were handed out, there was a big group photo and then FREEDOM! A lot of the groups had piled out of the venue on discovering they weren’t in the running to win, meaning the eventual group photo was missing a lot of the attendees.

It’s now some months down the line, and if the opportunity to do this again came up I think I probably would. It’s a huge learning experience, and an opportunity to actually churn out some software. We’re planning on finishing the app, although I went back into the codebase a week later and it’s a minefield, which is to be expected really.

Top Tips if you’re going to/thinking of going to a hackathon:

  • Definitely go. I was on the verge of bailing because I was in one of my unsocial “I can’t be bothered to talk to other people” modes. I always inevitably end up enjoying these things, so I forced myself, and even though I was being grumpy, I managed to find a great teammate and project.
  • Bring subtle entertainment, whether it be a Kindle or your pocket backlog or whatever. There’s a lot of ceremony and rubbish before you actually get to do any development. Also, sit at the back so you don’t get called out for not paying attention to the sponsors.
  • Bring layers! As you can see in the photo, I look utterly ridiculous, like a hobo Where’s Wally, but the aircon was maxed out and so I had to throw on all the layers I had. Bring gear for any occasion.
  • Speaking of gear; bring lots! Cables, phones, tablets, adapters, routers etc.  You never know what you’re going to need.
  • Set your expectations; sure, there’s a grand prize of getting into an accelerator, but you’re probably not going to win. It’s incredible the stuff that teams turned out at our hackathon. The winning team built a personal eBay with Instagram filters, Facebook and Twitter AND PAYPAL integration, in a slick, working application. Another team built a working multimeter with hardware that plugged into the audio jack. I didn’t even know what a multimeter was. People are good at this. Go in with the aim of having fun and learning something.

Hope that sums it up. Whack a question in the comments if there’s anything specific anyone wants to know.

Fighting information obesity

Hi. My name’s Sam, I’m addicted to information and suffer from information obesity. I’ve always been a big fan of information, but I’d say I only started to gorge myself in the last year or so. My brain is overweight with the volume of information I’m putting in it. I didn’t even enjoy it anymore, but I couldn’t stop. However, just recently I’ve been kicking the habit, and you can too!

Firstly, you need to know the symptoms of Information Obesity so you can self-diagnose. Do you:

  • Wake up in the morning, turn your alarm off, and read your email/Facebook/Twitter straight away?
  • Spend all of your public transport journeys immersed in social networks/Feedly feeds/the interwebz?
  • Open your email/social push notifications AS SOON AS YOU SEE THEM?
  • Constantly check your phone so you can open your notifications AS SOON AS YOU SEE THEM?
  • Feel like your brain is constantly slightly fuzzy?

Chances are you’re suffering from Information Obesity.  I too was suffering this, but don’t worry. You too can be cured.

I highly recommend reading this Gizmodo article on turning your smartphone into a dumbphone; it was the inspiration for my experiment. I genuinely just found that at any given moment I already had my phone out checking for Facebook updates or Twitter posts or emails. God forbid I let an email go unread for several minutes! So I decided to try out a dumbphone diet, although not quite as extreme as Gizmodo’s.

  • All push notifications were switched off, except for messaging apps (whatsapp etc)
  • All social type apps (fb, twitter, email) moved to the last screen on my iPhone
  • No checking any of those things on the phone; only on proper laptops

I set off with the goal of managing a week, and I’m now over two weeks in. And it’s such a relief! I encourage you to give it a go. Things I have discovered:

  • On the first day I really, really struggled not to check my email. Addiction-level difficult (ex-smoker preaching here). This went through the roof when I saw (by accident) that I had over 20 or so emails on the little red badge on the mail app. I managed to make it till 5PM, at which point I gave in. And my biggest discovery was that not a single one of those emails was of any importance. I’d been lucky enough to have been retweeted by Scott Hanselman after promoting his email list (which is epic, btw), hence the massive (for me) dump of emails.
  • And so with that, I realised that life goes on without me. No email needs answering desperately within six hours. It became easier. I moved email off to another screen to reduce the chance of accidentally looking at it, and it makes for fairly smooth sailing.
  • I assigned time in the morning before work to sit and read/reply to emails.  This was really cathartic, and meant I actually responded to emails as opposed to letting them hang around for a week and started the day clean.  Pro Tip! I read somewhere on twitter about starting the day with a glass of cold water (supposedly better than coffee).  Can highly recommend it. 
  • I’d also sit down after work to go through what’s going on. Confining my email and socials to these periods allowed my brain to breathe.
  • Commuting is really interesting without being locked into a smartphone. It allows me to think. As cheesy as it sounds, I’d gotten to the point where I was so busy reading the entire internet that I didn’t get a chance to think about the day ahead, or to just let my brain go free for a bit. It’s brilliant!

I have to admit after the end of week 1 I’ve not been as stringent on these rules, but things are definitely better now.  I miss a lot of twitter reading, but it’s nothing I can’t live without.

So, if you think your brain is fat and overweight with the volume of information you’re taking in, why not put it on a diet?  Try it for a week!