Sunday, February 25, 2018

Spring framework

Although I'm working with Spring on a daily basis, sometimes it's good to revisit the basics of a framework / concept. So I did, and here are my notes. Cheers!

Configuring the Application Context

Inversion of Control

Also known as Dependency Injection: objects don't construct their own dependencies; the container injects them at runtime.


You can set profiles like develop or production to be able to branch the code.
@Profile("develop")     --- this annotation makes sure the annotated bean (or @Bean method) is only used when the develop profile is active.
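A minimal sketch of branching with profiles (the Notifier interface and both implementations are invented for illustration):

import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;

public interface Notifier {
    void send(String message);
}

@Component
@Profile("develop")
class ConsoleNotifier implements Notifier {
    // Only instantiated when the develop profile is active
    public void send(String message) {
        System.out.println("DEV notification: " + message);
    }
}

@Component
@Profile("production")
class EmailNotifier implements Notifier {
    // Only instantiated when the production profile is active
    public void send(String message) {
        // a real e-mail would be sent here
    }
}

The active profile is selected at startup, e.g. with -Dspring.profiles.active=develop, and whoever autowires a Notifier gets the matching implementation.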

Bean Scopes

  • singleton: Scopes a single bean definition to a single object instance per Spring IoC container.
  • prototype: Scopes a single bean definition to any number of object instances.
  • request: Scopes a single bean definition to the lifecycle of a single HTTP request; that is each and every HTTP request will have its own instance of a bean created off the back of a single bean definition. Only valid in the context of a web-aware Spring ApplicationContext.
  • session: Scopes a single bean definition to the lifecycle of an HTTP Session. Only valid in the context of a web-aware Spring ApplicationContext.
  • global session: Scopes a single bean definition to the lifecycle of a global HTTP Session. Typically only valid when used in a portlet context. Only valid in the context of a web-aware Spring ApplicationContext.

e.g.: @Scope("singleton")
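For example (class name invented), a prototype-scoped bean gives you a new instance on every lookup, while the default singleton scope always returns the same one:

import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

@Component
@Scope("prototype")
public class ShoppingCart {
    // context.getBean(ShoppingCart.class) now returns a NEW instance every time;
    // with the default @Scope("singleton"), both lookups would return the same object.
}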


Since Spring 4.0, every class that is loaded into the bean factory gets at least one proxy.

Annotation based configuration

@Component scan

The @Component annotation indicates that a class should be loaded into the bean factory; component scanning picks these classes up.
Dependency injection is achieved through @Autowired.
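A minimal sketch of wiring this up with Java configuration (package and class names invented):

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
@ComponentScan("com.example.shop") // scan this package for annotated classes
class AppConfig { }

@Service // picked up by the scan and loaded into the bean factory
class InventoryService {
    public int stockOf(String sku) {
        return 42; // dummy value for the sketch
    }
}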


Field injection: private attributes can be autowired, but it is not a good practice.
Setter injection: inject through the setter method.
Constructor injection: good practice; inject through the constructor. The class can be made immutable, e.g.:

@Service
public class OrderServiceImpl {

    private final InventoryService inventoryService;

    @Autowired // optional on a single constructor since Spring 4.3
    public OrderServiceImpl(InventoryService inventoryService) {
        this.inventoryService = inventoryService;
    }
}
Lifecycle methods

@PostConstruct: the annotated method runs right after the bean is constructed and its dependencies are injected.
@PreDestroy: the annotated method runs before the bean is destroyed (Java has no destructors; the container calls this on shutdown).
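A small sketch of both hooks (class name invented):

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.springframework.stereotype.Component;

@Component
public class CacheWarmer {

    @PostConstruct
    public void warmUp() {
        // runs once the bean is constructed and its dependencies are injected
        System.out.println("cache warmed up");
    }

    @PreDestroy
    public void cleanUp() {
        // runs when the container is shutting the bean down
        System.out.println("cache flushed");
    }
}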

Bean lifecycle

Initialization phase:

  • Begins with creation of ApplicationContext
  • BeanFactory initialization phase
  • Bean initialization and instantiation
  1. Bean definitions loaded: from Java configuration, Bean (XML) configuration, component scanning and auto-configuration
  2. Post-process bean definitions: the BeanFactory is loaded with the bean references
  3. Instantiate beans: the bean references are held in the BeanFactory; the objects have been constructed, BUT they are not available for use yet!
  4. Setters: all initial dependencies are injected - the beans are still not ready for use
  5. Bean post-processors (pre- and post-init): almost the same as the initializer; see the sketch after this list
  6. Initializer: @PostConstruct methods are called here; after this step beans are instantiated and initialized, dependencies have been injected, BEANS READY TO USE
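A sketch of a custom post-processor from step 5 (class name invented); the container calls these two hooks for every bean, around its initializer:

import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.stereotype.Component;

@Component
public class LoggingBeanPostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) {
        System.out.println("before init: " + beanName); // before @PostConstruct
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        System.out.println("after init: " + beanName); // after @PostConstruct
        return bean;
    }
}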

Aspect Oriented Programming (AOP)

Aspects: reusable blocks of code that are injected into your application at runtime.

  • Logging
  • Transactions
  • Caching
  • Security
Cross-cutting concerns: In aspect-oriented software development, cross-cutting concerns are aspects of a program that affect other concerns. These concerns often cannot be cleanly decomposed from the rest of the system in either design or implementation, and can result in scattering (code duplication), tangling (significant dependencies between systems), or both. - So there are pieces of code that concern many other places, which would otherwise cause a lot of code duplication.

In Spring, aspects are handled via AspectJ (its annotations and pointcut language).

Parts of a Spring Aspect:

  • Aspect: A modularization of a concern that cuts across multiple objects. Transaction management is a good example of a crosscutting concern in J2EE applications. In Spring AOP, aspects are implemented using regular classes (the schema-based approach) or regular classes annotated with the @Aspect annotation (@AspectJ style).
  • Join point: A point during the execution of a program, such as the execution of a method or the handling of an exception. In Spring AOP, a join point always represents a method execution. Join point information is available in advice bodies by declaring a parameter of type org.aspectj.lang.JoinPoint.
  • Advice: Action taken by an aspect at a particular join point. Different types of advice include "around," "before" and "after" advice. Advice types are discussed below. Many AOP frameworks, including Spring, model an advice as an interceptor, maintaining a chain of interceptors "around" the join point.
  • Pointcut: A predicate that matches join points. Advice is associated with a pointcut expression and runs at any join point matched by the pointcut (for example, the execution of a method with a certain name). The concept of join points as matched by pointcut expressions is central to AOP: Spring uses the AspectJ pointcut language by default.
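Putting the four parts together, here is a sketch of a logging aspect (the pointcut expression and names are invented):

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class LoggingAspect {

    // Pointcut: every method in the (hypothetical) service package.
    // Advice: "before" advice, running at each matched join point.
    @Before("execution(* com.example.shop.service.*.*(..))")
    public void logCall(JoinPoint joinPoint) {
        System.out.println("calling: " + joinPoint.getSignature());
    }
}

Proxy-based weaving still has to be switched on, e.g. with @EnableAspectJAutoProxy on a configuration class.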

Tuesday, November 28, 2017

Udemy Docker notes

I was doing a course on Docker at Udemy ( https://www.udemy.com/docker-mastery/ ); here are my notes about it.
Kudos to Bret Fisher (https://twitter.com/bretfisher), who has done an amazing job creating this course. Thanks!


New command syntax

  • docker container ls
  • docker container ls -a
  • docker container start

What does docker container run actually do?

  1. Looks for the image in the local image cache
  2. Then looks in a remote hub (Docker Hub by default)
  3. Downloads the latest version if needed
  4. Starts a new container based on that image
  5. Gives it a virtual IP !!INSIDE!! the docker engine on a private network
  6. Opens up port 80 on the host and forwards to port 80 in the container (with the --publish flag)
  7. Starts the container using the CMD in the image's Dockerfile

Containers aren't really VMs: THEY ARE JUST PROCESSES

CLI process monitoring

  • docker container top: list the running processes in a specific container
  • docker container inspect [ID/NAME]: gets the metadata for that container (volumes, config, etc.)
  • docker container stats: shows live performance measures for all running containers

Getting a shell inside a container

  • docker container run -it: start a new container interactively (with a CLI, e.g. bash)
  • docker container exec -it: run an additional command in an existing container (no SSH needed!!)

-i means interactive (keeps STDIN open so the terminal receives input), -t means pseudo-TTY (simulates a real terminal, like what SSH does)

Docker networks
-p: exposing ports
BUT: you don't have to expose ports all the time; you can create virtual sub-networks whose containers "understand" and "see" each other, so they can communicate without exposing ports.
docker container inspect --format '{{ .NetworkSettings.IPAddress }}' webhost: just the IP. NEAT
Communication between two virtual networks (sub-networks) is only possible by going out through the exposed ports to the outside and communicating there.

Docker network CLI

  • docker network ls: networks
  • docker network inspect: get metadata from a network
  • docker network create --driver: creates a network
  • docker network connect / disconnect

With docker swarm this is easier to do!

Docker container DNS
The Docker daemon has a built-in DNS server that containers use by default.
Note: IPs are not a good way to communicate between containers; use names instead! An app can fail and come back with a new IP address, but its name still resolves to it.

Container images
What’s in an image:

  • App binaries and dependencies
  • Metadata about the image data and how to run the image

Download an image from Docker Hub.

docker pull [OPTIONS] NAME[:TAG|@DIGEST]

Image layers

  • Image layers:
  • docker image history nginx

    • history of the image layers
    • basically an image layer is a new command / change on top of the previous one, e.g. the base image layer is ubuntu, and then apt-get-ing something (like mysql) creates the next image layer
    • This is also how caching is done: if 2 images use the jessie base layer, it is not duplicated; both images use the same jessie image (layer)
    • Image layers are saved and basically attached together to form an image (so it can save space)
    • docker image inspect nginx
  • Image tagging and pushing to docker hub
    • a very similar process like git
  • Dockerfile
    • instructions how to build our image
    • FROM: which is the starting image
    • ENV: set environment variable
    • RUN: run commands
    • EXPOSE: expose port  (you still have to run -p if you would like to expose this to the outside) - so basically I am ALLOWING the image to be exposed, but I still have to do it explicitly!
    • CMD: run a command when container is run
    • General best practice: keep the things that change least at the top of the Dockerfile and the things that change most at the bottom

Persistent Data

Volumes: persistent data that outlives the container.
docker volume ls

Bind Mounting
Maps a host file or directory to a container file or directory.
Basically just two locations pointing to the same file(s).
That’s really good for development - binding local files to the containers

Containers defined together (e.g. in one compose file) can communicate with each other. Neat!
docker-compose up - this starts all the containers which are defined in the docker-compose.yml file.
With docker-compose you can manage a multi-container environment easily.

Docker swarm

Server clustering solution.
Can orchestrate the lifecycle of your containers.
docker swarm init - enables swarm mode in an environment (so you can run your own swarm on your computer)
With the swarm API we are not communicating with containers directly; rather we talk to the orchestrator, which makes sure the commands are executed, and e.g. if a service dies, it makes sure it is re-run!

Routing mesh
The routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there’s no task running on the node. The routing mesh routes all incoming requests to published ports on available nodes to an active container.


Docker stacks
Basically compose files for swarms.
docker stack deploy
Docker compose ignores deploy and swarm ignores build. Separation of concerns.
docker-compose cli not needed on Swarm server!
1 stack → 1 swarm!

Docker secrets
The easiest "secure" solution for storing secrets in Swarm:

  • usernames, passwords
  • TLS certificates and keys
  • SSH keys
  • Any data you would prefer not be "on front page of news"


Saturday, September 30, 2017

React portals!

React 16 is here!
This new version is extremely interesting not just because it has some cool new features, but because it's a complete rewrite from scratch!
It's fascinating how well TDD (Test Driven Development) can work in real-life, huge projects like React itself, since the aim was to create the new React while making all the tests that the "old" React had pass. And they did it! Step by step, day by day, with hard work, but they did it. And they even added new features! Kudos to the React team and especially Dan Abramov!
There are a few new features, like:

  • fragments and strings: you basically don't have to return one single element from render anymore
  • better error handling: the new <ErrorBoundary><MyComponenentWhichCouldError /></ErrorBoundary> component which "polyfills" the wrapped component with a new lifecycle method called componentDidCatch which works like a catch{..} in JS
  • Portals: later in this article
  • Better server side rendering: for me this is not a big change since I haven't really used SSR :P
  • Support for custom DOM attributes: unrecognized HTML and SVG attributes are now passed through to the DOM
  • Reduced file size
  • MIT license
  • New core architecture (Fiber): featuring async rendering, which is basically a two-phase rendering mechanism with a render and a commit phase. Here is a demo: https://build-mbfootjxoo.now.sh/ -- mind the square in the top right corner!
Let's talk a bit more about portals.
These are meant to solve communication outside of the parent component, which could be done before, but it was a bit hacky.
Now we can define a component which refers to a DOM node that is not under the parent component (basically out of the current component's reach). This component uses a portal to render into that DOM node.
We can then use the component defined above in another React component without it even knowing it's using portals! Neat!
Here is an example based on this pen. Mind that there is a component which uses a portal (Modal), while the other one (App) does not know Modal is using portals, yet it uses it.

See the Pen React portals modal example! by Adam Nagy (@nagyadam2092) on CodePen.

Sunday, September 10, 2017

React beginner assignment

Sometimes I do computer science teaching and get stuck on how to do it properly. I like to give a small theoretical summary of what a lesson is about, but then immediately give an exercise where the candidate can try out the new concepts.
This time I was asked to give lessons on React, and I thought the best way would be to give an assignment which tests whether the student understood its core concepts. So here is the assignment (and here is a solution: https://stackblitz.com/edit/react-assignment-nr-1-solution).
There is an App component which acts as the wrapper of the whole app, plus 4 other (child) components.
In summary:

  • App component: in its state it has an appState which is initially 1 but can be increased with a button.
  • CompOne component: expects an appState prop and displays it, plus it has its own state: compOneState, which you can edit with a text input (more info: https://facebook.github.io/react/docs/forms.html#controlled-components)
  • CompTwo: almost the same as CompOne: expects an appState prop and displays it, and it also has its own state: compTwoState, which is initialized to 1 and can be increased with a button.
  • CompFour: only props, no own state: appState and compOneState, which it displays
  • CompThree: almost the same as CompFour - only props: appState and compTwoState, which it displays.
This is approximately how it should look:
Pseudo code:
        <App>
          <CompOne>
            <CompFour />
          </CompOne>
          <CompTwo>
            <CompThree />
          </CompTwo>
        </App>

  • App: red border
  • CompOne: blue border
  • CompTwo: green border
  • CompThree: black border
  • CompFour: grey border
 Have fun learning!

Thursday, July 6, 2017

Understanding JWT

Using JWTs is a widely accepted way of handling authentication.
The reason JWTs are used is to prove that the data sent was actually created by an authentic source.
This means the client gets a token which is signed with a secret (stay tuned), and with that the server can trust that the client is already authenticated, without having to keep sessions in memory.
It's very important to mention that the purpose is not to hide data! Let me show you this through an example.

Autopsy of a token

I would like to try another approach to explaining how JWTs work: taking an existing example and analysing it.
Let's have a token:
Split it on the "." character and we have three parts.
The first one is the "Header". It is encoded via Base64, which you can decode via your browser for example.
Try it in your console: atob("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9")
This is going to return {"alg":"HS256","typ":"JWT"}, which stands for:
  • typ: the type of the token which is JWT
  • alg: the hash algorithm which is going to produce a signature for the header and the payload
The key part here is that this is not encrypted in any way, you can read it anytime! You just need to decode the Base64 string via e.g. your browser or however you would like to.
The middle part is the Payload. Similarly you can decode it by calling
which stands for:
BOOM! Nothing fancy, just decoding some Base64 string, that's it!
These claims mainly state who you are and when the token expires, but you can check out more here: https://self-issued.info/docs/draft-ietf-oauth-json-web-token.html#rfc.section.4.1.6
The last part is the signature itself. NOW comes the interesting part.
Here is the plan for how we can create a signature (see the sketch after the list):
  • First you need a secret, which is a string. In our case it is "That's a secret."
  • Then you'll need to Base64-encode your Header and Payload (remember, these are JSON strings!) - let's call them B64Header and B64Payload from now on.
  • After that you can sign these by passing two arguments to the chosen hashing algorithm (HS256 in this case):
    • The first argument is a string which looks like this: B64Header + "." + B64Payload
    • The second argument is the secret.
  • The hashing algorithm then returns the signature, which is going to be the last part of the token. Easy!
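A minimal sketch of these steps in Java (the payload claims are invented; only the algorithm matters):

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtSign {
    public static void main(String[] args) throws Exception {
        String header  = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
        String payload = "{\"sub\":\"1234567890\",\"name\":\"John Doe\"}"; // invented claims
        String secret  = "That's a secret.";

        // JWTs use URL-safe Base64 without padding
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String b64Header  = enc.encodeToString(header.getBytes(StandardCharsets.UTF_8));
        String b64Payload = enc.encodeToString(payload.getBytes(StandardCharsets.UTF_8));

        // HS256 = HMAC-SHA256 over "B64Header.B64Payload", keyed with the secret
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] sig = mac.doFinal((b64Header + "." + b64Payload).getBytes(StandardCharsets.UTF_8));

        System.out.println(b64Header + "." + b64Payload + "." + enc.encodeToString(sig));
    }
}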
To make sure you understood everything, visit https://jwt.io/, try the token I provided in this article, and make sure it is signed correctly (so it is a valid signature!).

Monday, June 12, 2017

Writing your own container in Go

This article is about creating a tool which is like Docker, but much more simplified.

First of all, why do we need containers?

I like to think about containers as separate little planets living in a universe. One planet is for database management, another is for an operating system, etc. They don't care about each other; however, you can create communication between them via e.g. radio waves. That could be the interface between them.
In my world, what is good about these planets is that you can copy the exact environment one has, but who is going to live on a planet is up to that specific instance. (E.g. you could achieve that with quantum entanglement - this is getting a bit too fantasy-like, haha.)
Let's get back to our containers. With these you can assure that if it works on your computer, it is going to work on any computer in the world - in theory. That said, you can easily automate your deployment processes and make development easier and more productive.
So - long story short - containers are really important for modern development environments.

Ok. How can I create one?

It's not that complex! Really!
First of all, I would like to give a shout-out to Liz Rice, whose talk I saw at Craft Conference in Budapest a few months ago. (similar video - twitter)
First, you have to understand two core features: control groups (cgroups) and namespaces.
So what are namespaces?
Well, fundamentally, a namespace is what a process can see of its environment: process IDs, file systems, users, networking, hostname, etc. And it's all yours, no one else's!
Ok now, let's see what cgroups are!
If we stick to the analogy from before, cgroups are what you can use: CPU, memory, disk I/O, etc. So basically we are speaking about resources.
And now let's jump into coding!

As you can see it's not that big of a source code. Only 56 lines!
The basic idea is that we are going to run a system call inside a system call. In the main function it is going to jump into the run function, since we passed the "run" parameter first (go run main.go run).
Basically what os/exec does is wrap external commands so they can run in their own namespace. Uhm, excuse me? I have heard about this word before.
Well, no surprise: with this line of code we are going to get our own namespace, and we are going to have our own process IDs. Great!
You can see here that we are passing the new argument "child", which means it is going to call the child function.
The next interesting part is the 33rd line, where we are passing a bunch of flags for the newly started process; these are:

  • CLONE_NEWUTS is for a new UTS namespace (its own hostname)
  • CLONE_NEWPID is for new process IDs
  • CLONE_NEWNS - this means unshare the mount namespace, so that the calling process has a private copy of its namespace which is not shared with any other process.
Neat! So now we know how we want to run our child process, which has its own namespace and cgroup, but it's still not working in its own "world". We have to give it a filesystem which behaves like an Ubuntu filesystem would.
So that's where another trick comes in: we have to have a Linux root filesystem which will be the playground of our container. You can learn about the requirements for the root filesystem here: http://www.tldp.org/LDP/sag/html/root-fs.html .
With this in our toolbelt we can change the root of our child execution to that particular root filesystem directory (46th line), and with the 48th line we mount /proc, because it is a special kind of directory - and THAT'S IT: if you run go run main.go run /bin/bash you will have your own little planet which is completely separate (well, now you know, it's not that separate, but it's acting like it is ;) ).
Enjoy and feel free to play around if you like these kinds of little aspects of modern software development / containerization!

Tuesday, April 11, 2017

Favorite logical puzzles

This topic is a bit related to my JavaScript interview questions post, because I think it is a very good test to see how a candidate relates to a so-called 'difficult' problem. If she just freezes, it is a bad sign, but if she starts to map out the options, that's a very good sign.
Here are a few of my favorite puzzles: