Stuart Leitch (Stevie) – Don't Panic!

SAML based Single Sign On (SSO) in Spring Security applications


Spring Security is a feature rich framework for handling security concerns in a web application. As standard, it has little support for SAML. However, SAML is now supported as an extension project – Spring Security SAML.

SAML

SAML (Security Assertion Markup Language) is an open standard that supports federated user login. That is, a user may authenticate to an Identity Provider (IdP) and then access an independent Service Provider (SP) without having to re-establish their identity. In practice, this usually means that a user provides their username and password to an application on one domain (the IdP) and can then single sign on (SSO) to an application on a different domain (the SP) without having to re-enter the username and password. Crucially, the SP is never even aware of the user's password. So long as the SP trusts the IdP and the IdP trusts the user, the SP can trust the user too. SAML is the data format that allows this trust to be established and the user's identity to be securely established on the SP.

Sample app

Spring Security SAML comes with an excellent sample app which can be set up in just a few minutes. After downloading the package from spring-security-saml on GitHub, the sample app can be run just by modifying a couple of files and deploying to Tomcat. The quick start guide will get you as far as setting up the sample app to SSO with SSO Circle, a free public identity provider. In this case, the sample app serves as the SP and SSO Circle serves as the IdP.

When the sample app is running, it serves not only to demonstrate the code, but to assist with generation of metadata required by the Identity Provider (IdP). This is a necessary step to establish the trust relationship between the SP and the IdP.

Building into an existing app

Building Spring Security SAML into an existing Spring Security application is also fairly straightforward. As a demonstration, I've added it to the legendary Spanners demo app. Download version 2.5 to see this in action.

For the most part, the spring-security-context.xml file is just a copy of the securityContext.xml file taken from the Spring Security SAML sample app. I’ve made a few changes to configure SAML the way I want and to configure the Spanners app security correctly.

IdP Discovery and selection

The SAMLDiscovery bean is responsible for choosing one of the configured IdPs to log in against:

<!-- IDP Discovery Service -->
<bean id="samlIDPDiscovery" class="org.springframework.security.saml.SAMLDiscovery">
    <property name="idpSelectionPath" value="/WEB-INF/security/idpSelection.jsp"/>
</bean>

The idpSelectionPath property defines a page that lets the user choose which IdP to log in against. The Spanners demo app federates against SSO Circle only, so there's no point in showing this page. The SAMLDiscovery bean will automatically return the default IdP if no idpSelectionPath property is set:

<bean id="samlIDPDiscovery" class="org.springframework.security.saml.SAMLDiscovery">
    <!-- Do not show the IdP selection page. Always use the default IdP. There's only one configured anyway. -->
    <!--<property name="idpSelectionPath" value="/WEB-INF/security/idpSelection.jsp"/> -->
</bean>

SAMLUserDetailsService

The SAMLUserDetailsService is similar to the Spring Security UserDetailsService interface. Annoyingly though, it's a separate interface, not a sub-interface. This means that any implementation of UserDetailsService that you already have will have to be reimplemented for SAML (though a wrapper implementation of SAMLUserDetailsService that bridges to a UserDetailsService wouldn't be too hard to make; see the sketch below).
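
Such a bridge might look something like this. It's just a sketch, not part of the Spanners code: the class name is mine, and it assumes that the NameID value asserted by the IdP matches a username that your existing UserDetailsService knows about.

import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.security.saml.SAMLCredential;
import org.springframework.security.saml.userdetails.SAMLUserDetailsService;

public class DelegatingSAMLUserDetailsService implements SAMLUserDetailsService {

    private final UserDetailsService delegate;

    public DelegatingSAMLUserDetailsService(UserDetailsService delegate) {
        this.delegate = delegate;
    }

    @Override
    public Object loadUserBySAML(SAMLCredential credential) throws UsernameNotFoundException {
        // Look the user up by the username asserted in the SAML NameID
        return delegate.loadUserByUsername(credential.getNameID().getValue());
    }
}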

The SAMLUserDetailsService is optional. If it's not provided, you'll get an instance of OpenSAML NameIDImpl as your principal. This is a little fiddly to work with and is likely to cause issues if you're converting an existing Spring Security project: Spring Security usually uses an implementation of UserDetails as the principal.
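
In that case, code that reads the current principal ends up handling an OpenSAML type directly, something like this sketch:

// Without a SAMLUserDetailsService, the principal is OpenSAML's NameID
Authentication auth = SecurityContextHolder.getContext().getAuthentication();
NameID nameId = (NameID) auth.getPrincipal();
String username = nameId.getValue();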

I’d recommend creating an implementation of SAMLUserDetailsService that returns a UserDetails object. I created a trivial implementation that grants every logged in user a standard set of roles:

public class SimpleSAMLUserDetailsService implements SAMLUserDetailsService {

    public static final String DUMMY_PASSWORD = "DUMMY_PASSWORD";
    private List<String> roles;

    public void setRoles(List<String> roles) {
        this.roles = roles;
    }

    @Override
    public Object loadUserBySAML(SAMLCredential credential) throws UsernameNotFoundException {
        // The username is the NameID value asserted by the IdP
        String username = credential.getNameID().getValue();

        // Grant the same configured set of roles to every user
        Collection<GrantedAuthority> gas = new ArrayList<GrantedAuthority>();
        for (String role : roles) {
            gas.add(new SimpleGrantedAuthority(role));
        }

        // The password is never used in a SAML login so a dummy value will do
        return new User(username, DUMMY_PASSWORD, gas);
    }
}

IdP / Metadata

The easiest way to configure the necessary metadata to establish trust with an IdP is to use the metadata display screen in the Spring Security SAML sample app. You could of course build this feature into your own app too, but you probably don't want it in a production application. For the Spanners app, I used the SAML sample app to set up all required metadata and then just copied the configuration into Spanners. I've removed the metadata display filter from the final application.

Security Annotations

Finally, I’ve configured method level security annotations as described in Protecting Service Methods with Spring Security Annotations.
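
As a quick reminder, the idea looks something like this sketch (the exact annotations and roles on the real Spanners service methods may differ):

import java.util.List;
import org.springframework.security.access.annotation.Secured;

public interface SpannersDAO {

    @Secured("ROLE_VIEWER")
    Spanner get(int id);

    @Secured("ROLE_VIEWER")
    List<Spanner> getAll();

    @Secured("ROLE_EDITOR")
    void create(Spanner spanner);
}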

Project status

At the time of writing, this project is still sitting at RC (Release Candidate) status and it's been over a year since the last RC release. However, there is still recent activity on the project and a recent forum post indicates that a GA (General Availability) release looks imminent.


Deploying to Tomcat 7 with Maven


The Tomcat7 plugin for Maven has a number of uses. In a previous post, I’ve looked at using it to deploy a build to an embedded Tomcat server for integration testing with Selenium.

A simpler use case is to deploy (or undeploy) a built artifact (war) to a Tomcat installation on the local machine or on a remote server.

The following examples are available to download from the Spanners Demo on GitHub.

Deploy a single artifact to localhost Tomcat

The Tomcat 7 plugin is added to the pom as follows:

<!-- Deploy wars to Tomcat 7 with mvn tomcat7:deploy or tomcat7:redeploy -->
<plugin>
    <groupId>org.apache.tomcat.maven</groupId>
    <artifactId>tomcat7-maven-plugin</artifactId>
    <version>2.1</version>
    <configuration>
        <url>http://localhost:8080/manager/text</url>
        <username>admin</username>
        <password>admin</password>
    </configuration>
</plugin>

Note that the URL ends with /manager/text. This is the Tomcat plain text interface that Maven uses to invoke commands in Tomcat.

The username and password must correspond with a Tomcat user with the ‘manager-script’ role. This user is created in Tomcat by adding the following to the tomcat-users.xml file in the <TOMCAT HOME>\conf directory.

<role rolename="manager-script"/>
<user username="admin" password="admin" roles="manager-script"/>

The Maven artifact can be built and deployed to Tomcat with the following command:

mvn tomcat7:deploy

and removed with the following command:

mvn tomcat7:undeploy

If the war is already running in Tomcat and you want to make a new build and update it, use the redeploy goal:

mvn tomcat7:redeploy

If the war has already been built and is to be deployed without rebuilding:

mvn tomcat7:deploy-only

Removing the username, password and URL from the pom

If multiple team members are working on the same project, it’s not ideal to store usernames and passwords in the Maven pom file. For a start, you may not want other users knowing your top secret password. Even if this isn’t a problem, another team member may use a different username and password to access their Tomcat.

Maven allows server usernames and passwords to be stored on each user’s own machine in the settings.xml which usually lives in ${user.home}/.m2/settings.xml. Note that it is possible to encrypt passwords in settings.xml, but I won’t cover that here.

In settings.xml, a server is configured as follows:

<server>
    <id>tomcat-localhost</id>
    <username>admin</username>
    <password>admin</password>
</server>

Instead of specifying the username and password in the pom, we can specify the server by its id:

<plugin>
    <groupId>org.apache.tomcat.maven</groupId>
    <artifactId>tomcat7-maven-plugin</artifactId>
    <configuration>
        <server>tomcat-localhost</server>
        <url>${tomcat.deploy.url}</url>
    </configuration>
</plugin>

In the above example, I’ve also abstracted out the <url> as a Maven property. This allows individual users to override the default setting by using a profile defined in their own settings.xml:

<profiles>
    <profile>
        <id>localOverrides</id>
        <properties>
            <tomcat.deploy.url>http://localhost:8080/manager/text</tomcat.deploy.url>
        </properties>
    </profile>
</profiles>

This would be activated as follows:

mvn tomcat7:deploy -PlocalOverrides

Alternatively, the profile could be marked <activeByDefault>true</activeByDefault>, as sketched below.
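
The <activeByDefault> flag lives inside the profile's <activation> element. Adapting the profile above, it would look something like this:

<profiles>
    <profile>
        <id>localOverrides</id>
        <activation>
            <!-- Active unless another profile is explicitly activated -->
            <activeByDefault>true</activeByDefault>
        </activation>
        <properties>
            <tomcat.deploy.url>http://localhost:8080/manager/text</tomcat.deploy.url>
        </properties>
    </profile>
</profiles>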

Deploying multiple artifacts

The Spanners demo consists of three deployable wars: spanners-mvc (Spring MVC demo), spanners-struts (Struts demo) and spanners-ws (Spring Web Services demo). If the above plugin and config is added to all three POMs, they could all be deployed with a single command.

Better yet, the plugin and configuration can be added to the root parent project, spanners-pom, in the <pluginManagement> section. Then each child project need only refer to the plugin by name; the configuration is inherited from the parent.

<plugin>
    <groupId>org.apache.tomcat.maven</groupId>
    <artifactId>tomcat7-maven-plugin</artifactId>
</plugin>

Running the following from the project root will deploy all three applications:

mvn tomcat7:deploy

Deploying automatically on builds

It may be desirable to have Maven automatically redeploy the application(s) to a test server on builds. This can be done by binding a Tomcat plugin goal to a Maven execution phase. The following example shows how to automatically deploy all runnable artifacts (wars) to an integration test server when a new build is deployed to the Maven repository. This may be useful in a Continuous Integration (CI) scenario where we want some system running the latest build.

Note the two meanings of the word deploy here: deploy is a standard Maven lifecycle phase which uploads an artifact to a remote repository for sharing with other developers. The Tomcat 7 plugin also has a deploy goal which uploads and starts the application in Tomcat. In this example, we want Tomcat updated with the latest build whenever we put an updated build in the Maven repository.

<profiles>
    <profile>
        <id>update-itg</id>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.tomcat.maven</groupId>
                    <artifactId>tomcat7-maven-plugin</artifactId>

                    <!-- Deploy to integration test server on Maven deploy -->
                    <executions>
                        <execution>
                            <id>deploy-to-integrationtest</id>
                            <goals>
                                <goal>redeploy-only</goal>
                            </goals>
                            <phase>deploy</phase>
                            <configuration>
                                <server>integrationtest</server>
                                <url>http://itg.example.com:8080/manager/text</url>
                            </configuration>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>

In this case the plugin is added to a build profile. This allows us to enable or disable it from the command line. It's disabled by default and can be activated by running:

mvn deploy -Pupdate-itg

The plugin redeploy-only goal is bound to the Maven deploy lifecycle phase. When this is run, the application is built, the artifacts (jars, wars and poms) are uploaded to the Maven repository and the updated application is started up on Tomcat, ready for manual or automated integration testing.

XRebel


ZeroTurnaround, the smart kids behind JRebel, have launched a new product: XRebel. And boy, it’s a good one! It’s described as “X-Ray glasses for your webapp”. It’s a performance profiler with features previously only seen in serious application performance monitoring (APM) solutions such as AppDynamics and New Relic.

XRebel vs APM

I’ve used AppDynamics in the past to diagnose application performance issues in development code. It’s a great application but designed primarily for complex production systems. The standard use case is to identify performance issues after they become a problem. So you’d leave it monitoring your production system and wait for it to alert you when something goes wrong. If and when something does go wrong, it’s great at identifying the problem but we’d rather have known about it before it landed in production. Preferably before it landed in test too.

XRebel, by contrast, is designed to be run during development and debugging. It has the same ability to identify the causes of performance issues but is much lighter and simpler to use. It's also considerably cheaper. Of course, it doesn't have nearly the same feature set as a full APM, but you can't have everything.

Running XRebel

Setting up XRebel takes around a minute. No, seriously. The 14 day free trial will give you a zip file containing the xrebel.jar file. Point your application server at the jar as a -javaagent option. In Tomcat, it looks like this:

[Image: Tomcat Java options with the xrebel.jar -javaagent entry]
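
On Linux, for example, the agent could be added in Tomcat's bin/setenv.sh. The jar path here is an assumption; point it at wherever you unzipped XRebel:

CATALINA_OPTS="$CATALINA_OPTS -javaagent:/opt/xrebel/xrebel.jar"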

Restart Tomcat and run your application. Now that the XRebel agent is active, it will add a widget to your pages (using magic or something!). The widget shows key performance information including number of database calls and the size of the session data.

[Image: the XRebel widget showing database query count and session data size]

XRebel can also be configured to alert on certain conditions – too many database calls, session data size too big and so on.

Have I done something stupid?

The widget above shows that 8 database queries were used to draw the page. That seems an awful lot for such a simple page. Clicking on the alert shows this:

[Image: XRebel trace showing SpannersDaoImpl.get called repeatedly from DisplaySpannersController]

Hang on, SpannersDaoImpl.get seems to be called 7 times from the DisplaySpannersController. That’s not right. A quick glance at the code reveals a design mistake:

@RequestMapping(value = "/displaySpanners", method = RequestMethod.GET)
public ModelAndView displaySpanners() {

	List<Spanner> allSpanners = new ArrayList<Spanner>();

	// Load the IDs of all spanners from database
	List<Integer> spannerIds = spannersDAO.getAllSpannerIds();

	// For each spanner id...
	for (Integer spannerId : spannerIds) {
		// Load the spanner object from the database
		Spanner spanner = spannersDAO.get(spannerId);
		allSpanners.add(spanner);
	}

	return new ModelAndView(VIEW_DISPLAY_SPANNERS, MODEL_ATTRIBUTE_SPANNERS, allSpanners);
}

We’re loading each spanner individually from the database rather than loading the whole lot in a single query.
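
A sketch of the fix, assuming the DAO's getAll() method (used elsewhere in the Spanners code) returns the full list in a single query:

@RequestMapping(value = "/displaySpanners", method = RequestMethod.GET)
public ModelAndView displaySpanners() {

	// Load all spanners from the database in one query instead of one query per id
	List<Spanner> allSpanners = spannersDAO.getAll();

	return new ModelAndView(VIEW_DISPLAY_SPANNERS, MODEL_ATTRIBUTE_SPANNERS, allSpanners);
}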

XRebel can also see exceptions thrown by the application even if they don’t bubble up to the front end.  For example, if I try to open the detailSpanner page for a spanner id that’s not in the database, a blank screen is shown. XRebel shows why:

[Image: XRebel showing the NullPointerException thrown behind the blank page]

Here’s the offending code:

@RequestMapping(value = "/detailSpanner", method = RequestMethod.GET)
public ModelAndView displayDetail(@RequestParam int id) throws SpannerNotFoundException {

	// Fetch the spanner
	Spanner spanner = spannersDAO.get(id);

	// XRebel demo - cause a NPE when spanner is null
	System.out.println("Spanner retrieved: " + spanner.toString());

	return new ModelAndView(VIEW_DETAIL_SPANNER, MODEL_SPANNER, spanner);
}

Why use XRebel

XRebel is not a substitute for full stack performance testing. It won't catch errors that can only be seen in an end-to-end integration test environment or with a full-size data set. However, leaving XRebel running on a development environment will very quickly highlight silly mistakes at exactly the time when they're cheapest to fix. XRebel is worth using for the same reason unit testing is: it is far cheaper to fix mistakes as the code is being worked on than when they reach test or production environments.

Database query strategies and Hibernate queries in particular are so easy to get wrong. Put the wrong query inside the wrong loop and the database will be queried 100 times instead of 10, maybe 10000 times instead of 100. XRebel gives an instant view into this kind of mistake. It also allows developers to make informed decisions about what to optimise or cache based on real data, not guesswork.

In a complex system it’s simply impossible to know what the application is doing on every request unless you go in and look. In the past this has involved complex APM tools or custom instrumentation. Often, this sort of analysis is just skipped altogether. With XRebel it’s now very easy.

Building Unit Test Data


Unit tests have most value when they’re easy to read and understand. Unit tests typically follow a very straightforward pattern:

  1. Simulate system state
  2. Call the method under test
  3. Verify the method’s result and side effects

So long as this pattern is obvious in the test, the test is readable.

@Test
public void testDeleteSpanner() throws Exception {

	// 1. Simulate system state - DAO returns a spanner when requested
	when(spannersDAO.get(SPANNER_ID)).thenReturn(SPANNER);

	// 2. Call method under test: controller.deleteSpanner
	ModelAndView response = controller.deleteSpanner(SPANNER_ID);

	// 3(a). Verify method side effect - spanner is deleted via DAO
	verify(spannersDAO).delete(SPANNER);

	// 3(b). Verify method result - controller forwards to display spanners page
	assertEquals("view name", VIEW_DISPLAY_SPANNERS, response.getViewName());
}

Three or four line test methods are succinct, focussed and readable. However, simulating the system state often requires creation of complicated stub objects. This can result in long test methods where most of the test is setup followed by one or two lines of verification.

Factoring out object creation

Where possible, it’s best to factor out any hard setup work into separate methods. If it’s not possible to create the necessary stub data in one or two lines, make a private helper method just to create the stub data.

BEFORE:

@Test (expected = IllegalArgumentException.class)
public void testZeroSizeSpanner() {

	Spanner spanner = new Spanner();
	spanner.setId(1); // Better set an id. Not necessary for the test but every spanner should have an id.
	spanner.setName("Bertha"); // Better set a name too
	spanner.setOwner("Mr Smith"); // Again, we're not testing this, but every spanner should have an owner
	spanner.setSize(0); // This is the important bit! The important attribute of this spanner is that its size is zero!

	spannersDAO.create(spanner);
}

AFTER:

@Test (expected = IllegalArgumentException.class)
public void testZeroSizeSpanner() {

	// Create a spanner with zero size
	Spanner spanner = zeroSizeSpanner();
	spannersDAO.create(spanner);	
}

/**
 * Creates a test spanner whose significant attribute is its zero size.
 * @return A new Spanner with zero size
 */
private Spanner zeroSizeSpanner() {
	Spanner spanner = new Spanner();
	spanner.setId(1);
	spanner.setName("Bertha");
	spanner.setOwner("Mr Smith");
	spanner.setSize(0);
	return spanner;
}

This makes simpler test methods but may result in a glut of private helper methods to create every conceivable variation of test data.

Builder Pattern

In the  book Growing Object-Oriented Software, Guided by Tests, authors Steve Freeman and Nat Pryce suggest a neat pattern for cleanly creating test data for unit tests. They suggest using the builder pattern to build test objects which are as simple or as complicated as necessary for the test. The builder can set default data in fields meaning that only data significant to the result of the test needs to be set.

public class SpannerBuilder {

	//Making default values public can be useful for test assertions
	public static final int DEFAULT_ID = 1;
	public static final String DEFAULT_NAME = "Bertha";
	public static final int DEFAULT_SIZE = 16;
	public static final String DEFAULT_OWNER = "Mr Smith";

	// Fields all have default values. We only need to call the setters if we want different values
	private int id = DEFAULT_ID;
	private String name = DEFAULT_NAME;
	private int size = DEFAULT_SIZE;
	private String owner = DEFAULT_OWNER;

	public SpannerBuilder setId(int id) {
		this.id = id;
		return this;
	}

	public SpannerBuilder setName(String name) {
		this.name = name;
		return this;
	}

	public SpannerBuilder setSize(int size) {
		this.size = size;
		return this;
	}

	public SpannerBuilder setOwner(String owner) {
		this.owner = owner;
		return this;
	}

	public Spanner buildSpanner() {
		Spanner spanner = new Spanner();
		spanner.setId(this.id);
		spanner.setName(this.name);
		spanner.setSize(this.size);
		spanner.setOwner(this.owner);
		return spanner;
	}
}

Notice that the setter methods return this. This allows method chaining. So we can now create our zero sized test spanner like so:

@Test (expected = IllegalArgumentException.class)
public void testZeroSizeSpanner() {

	// Create a spanner with zero size
	Spanner zeroSizeSpanner = new SpannerBuilder().setSize(0).buildSpanner();
	spannersDAO.create(zeroSizeSpanner);
}

When we use the SpannerBuilder, we don’t need to worry about setting an id, name or owner for the Spanner. If we don’t define an attribute, the spanner will use its default.

Improving readability

We can make a couple of improvements to the syntax of the builder to make it more like a domain specific language (DSL). First, create one or more static factory methods to create the builder instance rather than using the new keyword. Second, change 'setter' method names to words that make the chained calls read more like an English sentence. Like this:

Spanner hazell = aTestSpanner().named("Hazell").ownedBy("Mr Smith").build();
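
One way to get there is to rework the builder above along these lines. This is a sketch: the named, ownedBy and build names come from the example line, while sized is my own addition by analogy.

public class SpannerBuilder {

	private int id = 1;
	private String name = "Bertha";
	private int size = 16;
	private String owner = "Mr Smith";

	// Static factory method: with a static import of aTestSpanner, tests read like English
	public static SpannerBuilder aTestSpanner() {
		return new SpannerBuilder();
	}

	public SpannerBuilder named(String name) {
		this.name = name;
		return this;
	}

	public SpannerBuilder sized(int size) {
		this.size = size;
		return this;
	}

	public SpannerBuilder ownedBy(String owner) {
		this.owner = owner;
		return this;
	}

	public Spanner build() {
		Spanner spanner = new Spanner();
		spanner.setId(this.id);
		spanner.setName(this.name);
		spanner.setSize(this.size);
		spanner.setOwner(this.owner);
		return spanner;
	}
}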


Testing with mock users in Spring / Spring MVC


A common unit test scenario for Spring / Spring MVC applications is to verify behavior when logged in as a particular user. The new spring-security-test library available with Spring Security version 4 makes testing user access controls in Spring and Spring MVC applications far simpler.

Testing method level security

Testing method level security annotations used to require manually creating an Authentication object and setting it in the test’s SecurityContext. This is described in a previous post on Protecting Service Methods with Spring Security Annotations. It’s relatively straightforward to do this but it does clutter the test somewhat. We want to test what happens when a user is logged in and not concern ourselves with how to log the user in.

@Test
public void testViewerAccess() {
 
    // Login as viewer by creating an Authentication and setting it in the SecurityContext
    SecurityContextHolder.getContext().setAuthentication(new UsernamePasswordAuthenticationToken("viewer", "password"));
 
    // Viewer should have access to get* methods - just call the method to check no exception is thrown
    spannersDAO.get(1);
    spannersDAO.getAll();
 
    // Viewer should not have access to create / update / delete
    Spanner spanner = newSpanner();
    verifyException(spannersDAO, AccessDeniedException.class).create(spanner);
    verifyException(spannersDAO, AccessDeniedException.class).update(spanner);
    verifyException(spannersDAO, AccessDeniedException.class).delete(spanner);
}

The @WithMockUser annotation in spring-security-test allows us to run a test as if we're logged in as a user, just by annotating the test method:

@Test
@WithMockUser(roles=ROLE_VIEWER)
public void testViewerAccess() {

	// Viewer should have access to get* methods - just call the method to check no exception is thrown
	spannersDAO.get(1);
	spannersDAO.getAll();

	// Viewer should not have access to create / update / delete
	Spanner spanner = newSpanner();
	verifyException(spannersDAO, AccessDeniedException.class).create(spanner);
	verifyException(spannersDAO, AccessDeniedException.class).update(spanner);
	verifyException(spannersDAO, AccessDeniedException.class).delete(spanner);
}

In this case, I want my test user to have ROLE_VIEWER. The user’s name and password are not important to this test case so I don’t need to specify them.
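
If a test does care who the user is, the annotation can set the username too. A quick sketch ('alice' is an arbitrary name; ROLE_VIEWER is the same constant used above):

@Test
@WithMockUser(username="alice", roles=ROLE_VIEWER)
public void testUsernameIsVisibleToTest() {
	// @WithMockUser populates the SecurityContext before the test method runs
	Authentication auth = SecurityContextHolder.getContext().getAuthentication();
	assertEquals("alice", auth.getName());
}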

Testing controller access via URLs

In Spring Security, access to controllers can be restricted based on the controller’s URL mapping:

<http auto-config="true" disable-url-rewriting="true" use-expressions="true">
	<intercept-url pattern="/" access="permitAll" />
	<intercept-url pattern="/resources/**" access="permitAll" />
	<intercept-url pattern="/signin" access="permitAll" />
	<intercept-url pattern="/admin/**" access="hasRole('ROLE_ADMIN')" />
	<intercept-url pattern="/**" access="isAuthenticated()" />
</http>

In this security context definition, I want the signin page to be available to everyone, everything on /admin/ to be restricted to those with the ADMIN role and all other pages to be available to any logged in (isAuthenticated()) user. MockMvc can be used to call Spring MVC controllers and test whether or not they are allowed for a particular user.

In the simplest case, I can assert that the signin page is available to users who have not yet logged in:

@Test
public void testSigninIsAvailableToAnonymous() throws Exception {
	mockMvc.perform(get(SigninController.CONTROLLER_URL))
			.andExpect(status().isOk());
}

A more complicated test case would be to verify that the SwitchUser page (on /admin/switchUser) is available only to users who have logged in and have the correct role. Again, the new @WithMockUser annotation can be used to simply run the test as if a given user was logged in:

@Test
@WithMockUser(roles=ROLE_ADMIN)
public void testAdminPathIsAvailableToAdminRole() throws Exception {
	mockMvc.perform(get(SwitchUserController.CONTROLLER_URL))
			.andExpect(status().isOk()); // Expect that ADMIN users can access this page
}

@Test
@WithMockUser(roles=ROLE_VIEWER)
public void testAdminPathIsNotAvailableToViewer() throws Exception {
	mockMvc.perform(get(SwitchUserController.CONTROLLER_URL))
			.andExpect(status().isForbidden()); // Expect that VIEWER users are forbidden from accessing this page
}

Setting up MockMvc against the Spring Security Context

The above tests require MockMvc to be started against the Spring Web Application Context and the Security Context. Again, this has been simplified in spring-security-test version 4. Previously, the Spring Security filter chain had to be injected into the test class and then added to the MockMvcBuilder:

@Autowired protected WebApplicationContext wac;
@Autowired private FilterChainProxy springSecurityFilterChain; // Inject the filter chain created by Spring Security
protected MockMvc mockMvc;

@Before
public void setup() throws Exception {
			
	// Wire up Spring MVC context AND spring security filter
	mockMvc = webAppContextSetup(wac)
				.addFilters(springSecurityFilterChain) // Add the injected springSecurityFilterChain
				.build();
}

A new static method now exists to do this for you – SecurityMockMvcConfigurers.springSecurity(). This simplifies the creation of the MockMvc object a little:

@Autowired protected WebApplicationContext wac;
protected MockMvc mockMvc;

@Before
public void setup() throws Exception {
	// Set up a mock MVC tester based on the web application context and spring security context
	mockMvc = webAppContextSetup(wac)
					.apply(springSecurity()) // This finds the Spring Security filter chain and adds it for you
					.build(); 
}

Further information

The Spring Security Reference lists additional new features in the spring-security-test package including enhancements for testing method level security and Spring MVC security. In addition, a series of preview blog posts from Spring demonstrate testing method level security, testing Spring MVC and testing with HtmlUnit.

User Impersonation with Spring Security


A common requirement for secured applications is that admin / super users are able to login as any other user. For example, it may be helpful for a customer support analyst to access a system as if they were a specific real customer. The obvious way to do this is for the admin user to ask for the customer's password or look it up in the password database. This is usually an unacceptable security compromise – no one should know a customer's password except for the customer. And if the password database is implemented correctly it should be technically impossible for anyone – not even a system admin or DBA – to discover a user's password.

An alternative solution is to allow admin users to login with their own unique username and password and then allow them to impersonate any other user. After the admin user has logged in, they can enter the username of another user (no need for their password) and then view the application as if they had logged in as that user. Implementing user impersonation in this way also has the advantage that the system knows who has really logged in. If the system has an audit log, we can audit actions against the real admin user, rather than the impersonated user.

Implementing a user impersonation feature from scratch would be tricky and possibly introduce vulnerabilities to the system. Fortunately, this feature is available in Spring Security.

The following example is taken from the current release of the Spanners demo application, available on GitHub.

SwitchUserFilter

The starting point for user impersonation in Spring Security is the SwitchUserFilter. This Filter creates a URL that can be used to update the SecurityContext so that a different user is logged in. Here’s an example user switch URL:

http://localhost:8080/spanners-mvc/admin/impersonate?username=jones

The /admin/impersonate URL was configured by adding the following to the application's spring-security-context.xml:
<http auto-config="true" disable-url-rewriting="true" use-expressions="true">

    <!-- Enable user switching - admin users may view the site as another user -->
    <custom-filter position="SWITCH_USER_FILTER" ref="switchUserProcessingFilter" />
</http>

<beans:bean id="switchUserProcessingFilter" class="org.springframework.security.web.authentication.switchuser.SwitchUserFilter">
    <beans:property name="userDetailsService" ref="userDetailsService"/>
    <beans:property name="switchUserUrl" value="/admin/impersonate"/>
    <beans:property name="targetUrl" value="/displaySpanners"/>
    <beans:property name="switchFailureUrl" value="/admin/switchUser"/>
</beans:bean>

The switchUserProcessingFilter is set up with the following settings:

  1. A reference to the userDetailsService bean, configured by Spring, is injected.
  2. The switchUserUrl is set to /admin/impersonate. Any requests to /admin/impersonate will be handled by the SwitchUserFilter.
  3. The targetUrl is the first page shown on a successful user switch.
  4. The switchFailureUrl is shown on failure to switch user. I prefer to have this go back to the switch user form (see below) rather than show a dedicated error page.

The filter is configured in addition to the filters configured automatically by Spring Security (using <http auto-config="true">) at the position 'SWITCH_USER_FILTER'.

Switch User form

With the above security configuration, it is possible for ADMIN users to type in a URL that switches them to another user’s profile. Rather than access the switch user URL directly though, it would be easier to submit a form.

[Image: the Switch User form with a username field and Switch User button]

There’s no need for a Spring Controller to handle the POST submission of this form. It can just submit a GET request directly to the switch filter URL:

<form method="GET" action="<c:url value="/admin/impersonate"/>" class="form">
    <label for="usernameField">User name:</label>
    <input type="text" name="username" id="usernameField" />
    <input type="submit" value="Switch User" />
</form>

Securing the form and filter URL

If no additional security rules are configured then any user could access the user switch page or the processing filter. This would allow any user of the application to be able to impersonate any other user. Both the form page and the filter URL should be protected so that only users with ADMIN role can access them:

<intercept-url pattern="/admin/**" access="hasRole('ROLE_ADMIN')" />

If any other user attempts to access the form or the processing filter URL, they’ll get an HTTP 403 Forbidden error. It is absolutely vital to secure the configured switchUserUrl in this way to prevent ordinary users from accessing this functionality.

Who am I? No, who am I really?

Once we’ve switched to another user using this mechanism, the Authentication object in the SecurityContext is that of the switched user. If you query the current user’s name, permissions or roles, you’ll get those of the switched user, not those of the ADMIN user who actually logged in. In some cases, we need to check details of the real user. These are stored as an additional GrantedAuthority of the switched user. An instance of SwitchUserGrantedAuthority signifies that the current user is being impersonated. The original ADMIN user can be retrieved via SwitchUserGrantedAuthority.getSource().
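
For example, the original admin's Authentication can be dug out of the current user's authorities, something like this sketch:

Authentication auth = SecurityContextHolder.getContext().getAuthentication();
for (GrantedAuthority authority : auth.getAuthorities()) {
	if (authority instanceof SwitchUserGrantedAuthority) {
		// getSource() returns the Authentication of the admin who really logged in
		Authentication originalUser = ((SwitchUserGrantedAuthority) authority).getSource();
	}
}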

The SwitchUserGrantedAuthority always corresponds to a role named ‘ROLE_PREVIOUS_ADMINISTRATOR’, as defined in SwitchUserFilter. This allows us to easily grant access to ADMIN users even if they’re currently impersonating another user – like this:

<intercept-url pattern="/admin/**" access="hasRole('ROLE_ADMIN') or hasRole('ROLE_PREVIOUS_ADMINISTRATOR')" />

Testing access rules

It’s worth adding unit tests around expected access rules. In particular, we should verify that non-admin users can never access the user switch page. This can easily be tested using the @WithMockUser annotation with the Spring MVC Test Framework, as detailed in a previous post. As an example, this test verifies that ADMIN users can access the restricted SwitchUserController URL, even if they’re currently impersonating a user with the EDITOR role.

@Test
@WithMockUser(roles={ROLE_PREVIOUS_ADMINISTRATOR, // User logged in as ADMIN...
					 ROLE_EDITOR}) //...but is currently viewing as an EDITOR
public void testAdminPathIsAvailableToAdminUserSwitchedToViewer() throws Exception {
	mockMvc.perform(get(SwitchUserController.CONTROLLER_URL))
			.andExpect(status().isOk());
}


Docker Part 1: Running Containers


Docker is a containerization technology that's been getting quite a bit of attention over the last year or two. It offers a more lightweight, flexible and repeatable alternative to creating and running full Virtual Machines (VMs). In this, the first in a series of posts on Docker, I'll look at how to run an application inside a pre-built container image. The series covers:

  1. Running Containers (this post);
  2. Building Images: How to create a new container image, customized to your requirements;
  3. Disposable Containers: Using containers to run a short-lived job rather than a long-lived service;
  4. Composing an Environment Stack: Creating an environment composed of multiple linked containers.

What is Docker?

Docker allows you to build and run containers for applications. A container can usually be thought of as if it were a full Virtual Machine. It has most of the attributes of a VM: an IP address, a network connection and a file system. Applications can be loaded into and run from the container. However, a Docker container is not a full VM. It’s just an abstraction of the bits we’re interested in. The low level stuff in the operating system kernel is shared with the host system and with other containers. This means that containers are much lighter than full VMs. They’re quicker to start and are more memory efficient.

This not only means that we can cram more containers into our available hardware, it means that containers are so cheap to run that they can be considered disposable. With Docker, we can start up a container just to run a single job and then dispose of the container when it’s done. If a service is under heavy load, we can create additional containers to service the additional requests and then dispose of them when the load settles down. With Docker, we can move away from the idea of permanent infrastructure towards the idea of infrastructure on demand.

Getting Started

Docker has excellent instructions on installing Docker for Windows, Linux or Mac OS X. I’ve chosen to install Docker on Linux rather than my usual Windows environment because:

  1. A bug in the current Windows package causes it to fail on startup (the solution was to upgrade to the latest VirtualBox);
  2. Docker is a native Linux application. The Windows / Mac OS X installations just use VirtualBox to create a Linux VM behind the scenes. I figured it would be simpler just running natively from Ubuntu.

Running the Spanners demo in Docker

As usual, I’m using the Spanners demo app as a basis for this demonstration. This application usually runs in Tomcat 7 with a MySQL 5.6 database, both installed on my Windows workstation. Obviously, the Dockerized version will run in Linux rather than Windows but should be otherwise identical.

I first considered finding a pre-built Docker image that had Java 8, Tomcat 7 and MySQL 5.6 installed and ready to go. However, this rather goes against the recommended design of a containerized stack. As the containers are lightweight, it makes sense to have each container do only one thing. So my environment design is to have a webserver container running Tomcat (with Java as a prerequisite) and a separate database container running MySQL.

Many vendors – including Apache (Tomcat) and Oracle (MySQL) – offer official Docker images. That is, a container that runs their product and nothing else. I’ve chosen to run the official Tomcat 7 (with Open JDK 8) and MySQL 5.6 images.

Setting up the database

The official MySQL 5.6 image is started with the following Docker command:

sudo docker run --name spanners-database -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5.6

Note that the command was run with sudo. The docker command requires root privileges so you must either login as root or run all commands with sudo.

If this is the first time this command is run, it will download the mysql:5.6 image from the official Docker repository at Docker Hub. Once it's downloaded, Docker will start up the container and run it in the background (the -d switch signifies that it is to be run detached, not interactively). As it's running detached, there's very little to see once it's up, but we can run commands against it:

sudo docker exec -i spanners-database mysql --password=my-secret-pw < spanners-database/initialize_db.sql

This command tells Docker to execute a command against the spanners-database container. In this case, the command is to have mysql run the initialize_db.sql script. This script sets up the spanners schema and creates a user for the webserver to connect as:
CREATE SCHEMA `spanners` ;

USE `spanners`;

delimiter $$

CREATE TABLE `spanner` (
  `id` int(11) NOT NULL auto_increment,
  `name` varchar(255) default NULL,
  `size` int(11) default NULL,
  `owner` varchar(255) default NULL,
  PRIMARY KEY  (`id`),
  UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1$$

GRANT ALL PRIVILEGES ON spanners.* TO "spanners"@"%" IDENTIFIED BY "password";

Setting up the webserver

The official Tomcat 7 image is started like so:

sudo docker run --name spanners-webserver --link spanners-database:spanners-database -p 8080:8080 -d tomcat:7-jre8

The --link switch links this container to the existing spanners-database container. This essentially creates a network connection between the two containers and allows the webserver to communicate with the database. The -p switch opens up a port in the new container: port 8080 on the container will be served as port 8080 on the host machine, allowing us to access the running Tomcat instance at http://localhost:8080. Again, -d signifies that we're running detached, not interactively.

Again, this will be slow the first time it’s run as the container image is downloaded from Docker Hub. Once it has downloaded and started, browsing to http://localhost:8080 shows the usual Tomcat home page:

[Image: the default Tomcat home page at http://localhost:8080]

This is a fresh instance of Tomcat with no configuration settings changed and no applications deployed. We need to copy the necessary config files and database driver for the Spanners app. This is done using the docker cp command which copies files from the host system into the container.

export CONTAINER_CATALINA_HOME=/usr/local/tomcat
sudo docker cp spanners-webserver/tomcat/context.xml spanners-webserver:$CONTAINER_CATALINA_HOME/conf/context.xml
sudo docker cp spanners-webserver/tomcat/tomcat-users.xml spanners-webserver:$CONTAINER_CATALINA_HOME/conf/tomcat-users.xml
sudo docker cp spanners-webserver/tomcat/mysql-connector-java-5.1.36-bin.jar spanners-webserver:$CONTAINER_CATALINA_HOME/lib/mysql-connector-java-5.1.36-bin.jar

We then need to restart Tomcat (restarting the container will do this):

sudo docker stop spanners-webserver
sudo docker start spanners-webserver

Finally, we need to deploy the Spanners-MVC application. This could also be done using the docker cp command to copy the war to the Tomcat webapp directory. I prefer to have Maven deploy the application as described in a previous post:

mvn tomcat7:deploy-only -pl :spanners-mvc

Now, if we navigate to http://localhost:8080/spanners-mvc/ the Spanners-MVC landing page is shown:

[Image: the Spanners-MVC landing page]

Is that it?

We’ve used Docker to run an instance of MySQL and an instance of Tomcat. We’ve then manually configured these instances to run our application. Using Docker in this way does not offer much advantage over simply installing MySQL and Tomcat to the host machine. Docker’s power comes from the ability to modify container definitions (images) so that they start already configured exactly the way we want.

In the next article in this series, we’ll look at building Docker containers from our own customized images and how this simplifies our set-up work.

Docker Part 2: Building Images

$
0
0

The previous post in this series on Docker looked at starting up containers built from predefined images provided by Docker Hub. In this, the second in the series, I’ll look at creating customized images tailored to my specific requirements. I’ll also look at how my custom image can be pushed to Docker Hub for others to use.

To recap, this series covers:

  1. Running Containers: Installing Docker and starting containers;
  2. Building Images (this post);
  3. Disposable Containers: Using containers to run a short-lived job rather than a long-lived service;
  4. Composing an Environment Stack: Creating an environment composed of multiple linked containers.

Containers and Images

A Docker Container is the equivalent of a Virtual Machine (VM). It has a file system and a network connection. Services and jobs can be run from a container. Containers can be started, stopped, inspected and networked to other containers.

A Docker Image is a snapshot of a container. It defines the state of a container at a given time – usually the initial state. From this snapshot or image, a container can be started. Indeed, one image could be used to start up multiple containers.
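
For example, the official Tomcat image used in part 1 could back two independent containers (the container names here are illustrative):

sudo docker run --name webserver-1 -d tomcat:7-jre8
sudo docker run --name webserver-2 -d tomcat:7-jre8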

The previous post in this series focused on running containers. This one focuses on defining images to create the containers.

There are two ways to create a new Docker image. The first is to take a snapshot of a running container using the docker commit command. The second is to create a Dockerfile which contains a list of instructions required to create the image. This is usually preferable to manually configuring a container and then taking a snapshot, as it is repeatable and the Dockerfile is human readable. This post will cover only creation from a Dockerfile, not the docker commit command.

Database Image

In the previous post, we created a MySQL database container by starting the official MySQL 5.6 image and then performing a few manual steps to initialize it for our application. If we create a custom image though, we can start the container already initialized and ready to go. When the database starts up for the first time, we want the application’s schema already created and a user called ‘spanners’ ready for the application to connect as.

The Dockerfile looks like this:

FROM mysql:5.6

# Copy the database initialize script: 
# Contents of /docker-entrypoint-initdb.d are run on mysqld startup
ADD  docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/

# Default values for passwords and database name. Can be overridden on docker run
# ENV MYSQL_ROOT_PASSWORD=my-secret-pw # Not defaulted for security reasons!
ENV MYSQL_DATABASE=spanners
ENV MYSQL_USER=spanners
ENV MYSQL_PASSWORD=password

The first line tells Docker that our image should be based on the official MySQL 5.6 image. Everything that follows will be built on top of that image.

The next line copies the contents of the local docker-entrypoint-initdb.d/ directory into the container's /docker-entrypoint-initdb.d/ directory. When the MySQL container starts, it will look for this directory and run any scripts inside it. We can use this to initialize the schema for our application. Our schema creation script looks like this:
USE `spanners`;

delimiter $$

CREATE TABLE `spanner` (
  `id` int(11) NOT NULL auto_increment,
  `name` varchar(255) default NULL,
  `size` int(11) default NULL,
  `owner` varchar(255) default NULL,
  PRIMARY KEY  (`id`),
  UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1$$

Finally, three environment variables are created in the container using the ENV keyword. These are used to have the MySQL container create a database schema, username and password. Again, the official MySQL image takes care of this for us. It is also possible to set the root password in this way but I've chosen not to.
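
Because these are plain environment variables, any of them can be overridden when the container is started. For example (a sketch, reusing the run command from earlier in this series):

sudo docker run --name spanners-database -e MYSQL_ROOT_PASSWORD=my-secret-pw -e MYSQL_PASSWORD=different-password -d spanners-database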

Webserver Image

In the previous post, we created a webserver image by starting the official Tomcat 7 image and then copying in some config files. Again, we can create a custom image so that the container starts up with everything it needs.

FROM tomcat:7-jre8
MAINTAINER Stuart 'Stevie' Leitch <hotblack@disasterarea.org.uk>

# context.xml contains jndi connection to spanners-database
ADD tomcat/context.xml $CATALINA_HOME/conf/

# tomcat-users.xml sets up user accounts for the Tomcat manager GUI
# and script access for Maven deployments
ADD tomcat/tomcat-users.xml $CATALINA_HOME/conf/

# MySQL driver jar
ADD tomcat/mysql-connector-java-5.1.36-bin.jar $CATALINA_HOME/lib/

# Install spanners-mvc war from Maven repo
RUN wget http://www.disasterarea.co.uk/maven/org/dontpanic/spanners-mvc/3.2/spanners-mvc-3.2.war -O /usr/local/tomcat/webapps/spanners-mvc.war

The first line here tells Docker that this image is based on the official Tomcat 7 (with JRE 8) image. The following three ADD commands add a context.xml file, tomcat-users.xml and the MySQL driver jar into the container's Tomcat directory.

Finally, we RUN the wget command to download the application war and place it in Tomcat's webapps directory. This means that when the container starts up Tomcat, the Spanners-MVC application will already be deployed.

Building and running the images

Dockerfiles must be built using the docker build command before the images can be run. The following commands build the database and webserver images:

sudo docker build -t spanners-database .
sudo docker build -t spanners-webserver .

The -t switch tags the image with a name (spanners-database / spanners-webserver) and the dot at the end tells Docker to build the Dockerfile in the current directory.

The two images are now ready to run:

sudo docker run --name spanners-database -e MYSQL_ROOT_PASSWORD=my-secret-pw -d spanners-database
sudo docker run --name spanners-webserver --link spanners-database:spanners-database -p 8080:8080 -d spanners-webserver

The switches used are exactly the same as the ones we used to start the official MySQL / Tomcat images as described in part 1 of this series.

Publishing images to Docker Hub

The two images just built exist only on the machine that built them. They can be shared with other Docker users by publishing them to Docker Hub or some other Docker repo. One way to do this is to use the docker push command as described in the Docker Tutorial.

It's also possible to automatically build and publish an image whenever its definition is updated. This can be done by creating an Automated Build on Docker Hub. In this way, a Docker Hub (image) repository can be linked to a GitHub or Bitbucket (source code) repository. Whenever the source Dockerfile is updated in GitHub / Bitbucket, Docker Hub will build the updated image.

The source files for the Spanners containers are available on my GitHub repo. The associated automated builds are on my Docker Hub repo.

Running the published container images

The container images described in this tutorial are available from my Docker Hub account (hotblac) and have been tagged with a version number (3.2). They can be run directly from the Docker Hub automated build by referring to them by their full name:

sudo docker run --name spanners-database -e MYSQL_ROOT_PASSWORD=my-secret-pw -d hotblac/spanners-database:3.2
sudo docker run --name spanners-webserver --link spanners-database:spanners-database -p 8080:8080 -d hotblac/spanners-webserver:3.2

This starts to show the power of Docker. From any machine running Docker, it is possible to run the Spanners-MVC web application using just these two commands. It is not necessary to manually install a database or webserver on the host machine. It’s also not necessary to download and build the application source code or even the Docker image definitions. The whole application and its environment stack is available – already built – from Docker Hub. This means that the application can be started quickly and consistently on any developer workstation, on any physical server or on cloud based infrastructure.

In the next article in the series, we'll look at using 'disposable' Docker containers to run short-lived jobs, specifically building and deploying the application from source code.


Docker Part 3: Disposable Containers


The previous posts in this series on Docker have looked at using containers to run services, specifically a web server and database server. However, Docker allows containers to be created, run, stopped and destroyed so cheaply that they can be used to run a single job. This job could be a script or even a single command. Unlike a service, a job will stop running when it’s complete. A container running a short lived job can be set to automatically stop and remove itself once the job is complete. If the job needs to be run again, it is reasonably efficient for Docker to start up a brand new container as required.

To recap, this series covers:

  1. Running Containers: Installing Docker and starting containers;
  2. Building Images: How to create a new container image, customized to your requirements;
  3. Disposable Containers (this post);
  4. Composing an Environment Stack: Creating an environment composed of multiple linked containers.

Maven build job

This example will look at a simple job: to build and deploy the Spanners demo app from source. The application is built in Maven and deployed to a Tomcat webserver.

We can build and deploy the application using a single Maven command:

mvn clean install tomcat7:redeploy

The clean and install goals make a fresh build and install the built artifacts to the local repository. The tomcat7:redeploy goal takes a built war and deploys it into a running Tomcat instance, as described in a previous article on Deploying to Tomcat 7 with Maven.

The Docker Container

All we need to have Docker run the Maven build is to run the build command in a container that has Maven installed. Once again, we can use an official vendor image on Docker Hub as a start. The official Maven image defines a container that has Maven (and Java) installed. We can extend this image so that the build job is started when the container starts. The Dockerfile for this image is fairly simple:

FROM maven:3.3.3-jdk-8

MAINTAINER Stuart 'Stevie' Leitch <hotblack@disasterarea.org.uk>

# Copy settings.xml
ADD settings.xml $MAVEN_HOME/conf/settings.xml

# Build and deploy to the spanners-webserver container
CMD ["mvn", "clean", "install", "tomcat7:redeploy", "-Pdockerbuild"]

The settings.xml file copied into the container (using the ADD command) just contains connection details for the target webserver:
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">

  <servers>
    <server>
        <id>docker-webserver</id>
        <username>admin</username>
        <password>admin</password>
    </server>
  </servers>
    
</settings>

CMD is used to execute the Maven command when the container starts. Notice that the -P flag is used to specify a Maven profile to activate. The dockerbuild profile is defined in the project's pom and specifies the target webserver for the tomcat7 plugin:
<profile>
    <id>dockerbuild</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.tomcat.maven</groupId>
                <artifactId>tomcat7-maven-plugin</artifactId>
		<configuration>
		    <server>docker-webserver</server>
		    <url>http://docker-webserver:8080/manager/text</url>
		</configuration>
            </plugin>
        </plugins>
    </build>
</profile>

These settings tell the tomcat7 plugin to deploy to a machine called docker-webserver (port 8080) using the username and password specified in the settings.xml under server id docker-webserver.

Running the build from a container

This container is designed to build source code from the host machine and then deploy it to a machine called docker-webserver. The container assumes that the source code already exists and that the webserver (and corresponding database server) are already running.

The source code of the Spanners demo app can be downloaded from GitHub with the following command:

git clone https://github.com/hotblac/spanners.git $HOME/spanners-latest

The Dockerized database and webservers can be started as described in the previous article:

sudo docker run --name spanners-database -e MYSQL_ROOT_PASSWORD=my-secret-pw -d hotblac/spanners-database:3.2
sudo docker run --name spanners-webserver --link spanners-database:spanners-database -p 8080:8080 -d hotblac/spanners-webserver:3.2

The container image for this example has been configured as an automated build at Docker Hub with the name hotblac/spanners-builder. This means it can be downloaded and run from the docker command line without having to manually build it on the target machine. It can be run from any internet connected machine with Docker installed with the following command:

sudo docker run --name spanners-builder --link spanners-webserver:docker-webserver -v $HOME/spanners-latest:$HOME/container-build-dir -w $HOME/container-build-dir -it --rm hotblac/spanners-builder

This command contains a number of flags to set up the container correctly:

  • --name spanners-builder: Assigns a memorable name to this container.
  • --link spanners-webserver:docker-webserver: Creates a virtual network connection between this container and the spanners-webserver container. The link is aliased as docker-webserver, meaning that this container can access the webserver as if it were a physical machine called docker-webserver. This link is required so that Maven has a running Tomcat webserver in which to deploy the built application.
  • -v $HOME/spanners-latest:$HOME/container-build-dir: Sets up a volume at $HOME/container-build-dir on the container, pointing to $HOME/spanners-latest on the host machine. $HOME/spanners-latest is where we told git to clone our source code into. This allows the container to access the downloaded source code on the host machine.
  • -w $HOME/container-build-dir: Sets $HOME/container-build-dir as the current working directory of the container. This is the directory containing the source code to be built. When Maven starts, it will run from this directory and build the code.
  • -it: A shorthand for the two single letter switches -i and -t. Together, these flags allow a container to run interactively (rather than detached in the background) and connect the container's input and output to the host shell.
  • --rm: Automatically removes the container when it exits. This makes our container truly disposable. When it completes, it will remove itself from Docker.

When this rather lengthy command is run, Docker will start the hotblac/spanners-builder container, downloading it from Docker Hub first if necessary. The container will run a single Maven command to build the source code on the host machine and then deploy the resulting war into the spanners-webserver container. On completion of the Maven build, the container will stop and remove itself.

If we want to build the code again later, we can run the same Docker command to start a brand new instance of the container.

So what?

This example demonstrates how Docker could be used to efficiently and consistently run any job. The advantage of running a job in a fresh container is that the container starts in a consistent state. In this example of a Maven build job, the fresh container ensures that the build runs correctly from any machine and that it’s not dependent on any artifacts or settings on the developer’s workstation. This meets a requirement of a Continuous Integration (CI) build server. In addition, any number of container instances can run simultaneously, enabling consistent parallel builds.

This principle can be expanded to any finite job: backup processing, application deployment, reporting and so on. Running a job within a Docker container is advantageous because:

  • It’s not necessary to have a physical server waiting idle for a job to start. The job can be started with its container only when it’s needed.
  • Jobs run in disposable containers are isolated from each other and across multiple runs of the same job.
  • Jobs can very easily be moved from one physical server to another. Specifically, the same container can be used to run the job in a development environment, on a test server and in production.

In the next and final article in this series we’ll look in more detail at how individual Docker containers can be composed into a complete environment stack.

Docker Part 4: Composing an Environment Stack


This series of articles on Docker has so far covered a number of examples of creating and running individual Docker containers. We’ve also seen an example of how multiple Docker containers can be linked together using the --link command line flag.

Best practice for containerization suggests that each container does exactly one job. A full environment stack for a complex application may comprise many components – databases, web applications, web/micro services – each requiring its own container. Setting up the full working environment stack may require several docker run commands, run in the right order, with just the right flags and switches set.

An obvious way to manage this is with a startup script. A neater solution is to use Docker Compose. Docker Compose allows multi-container applications to be defined in a single file and then started from a single command.

To recap, this series covers:

  1. Running Containers: Installing Docker and starting containers;
  2. Building Images: How to create a new container image, customized to your requirements;
  3. Disposable Containers: Using containers to run a short-lived job rather than a long-lived service;
  4. Composing an Environment Stack: (this post).

The Spanners application stack

It will come as no surprise to regular readers that I’ll use the Spanners demo application to demonstrate Docker Compose. Spanners requires a near trivial application stack – it’s a simple webserver / database application. The Dockerized containers for the Spanners demo application are available on Docker Hub. Part 2 in this series describes how to run them using the docker run command:

sudo docker run --name spanners-database -e MYSQL_ROOT_PASSWORD=my-secret-pw -d hotblac/spanners-database:3.2
sudo docker run --name spanners-webserver --link spanners-database:spanners-database -p 8080:8080 -d hotblac/spanners-webserver:3.2

This demonstrates a shortcoming in Docker. Each image is near trivial – the Dockerfile for each is just a few lines long. The full application stack is also simple – just two servers linked together. But the commands to run these trivial container images in a simple way are quite complex. There are just too many command line switches. In a more realistic environment stack consisting of a dozen or more containers, this would be unmanageable.

Docker Compose exists to manage this complexity.

Composing the stack

Docker Compose allows multi-container applications to be defined in a YAML file. The file contains a definition of every container in the application and their switches / flags. It’s best illustrated with an example:

database:
  environment:
    MYSQL_ROOT_PASSWORD: my-secret-pw
  image: hotblac/spanners-database:3.2
webserver:
  links:
    - database:spanners-database 
  ports:
    - "8080:8080" 
  image: hotblac/spanners-webserver:3.2

This docker-compose.yml file defines two containers labelled database and webserver. The container image is defined for each (hotblac/spanners-database:3.2 and hotblac/spanners-webserver:3.2 respectively).

Specific settings that would normally be defined as command line switches of the docker run command can be defined in the docker-compose.yml file. In this case, the database container’s environment settings and the webserver’s link and port settings are defined. This makes the docker-compose.yml file equivalent to the two long docker run commands above.

Running the stack

All containers comprising the application can be started with a single simple command:

sudo docker-compose up -d

This command starts all containers defined in ./docker-compose.yml. The -d flag here runs all containers detached – how we’d usually prefer service containers to run. Omitting this flag would start all containers interactively and all shell output of each container would be shown in the host shell.

The containers can be stopped just as easily:

sudo docker-compose stop

Additional docker-compose commands are available, but these two are usually all we need.

Running a containerized application on another machine

Docker Compose makes it trivial to run a full multi-container application on any machine that has Docker and Docker Compose installed. To demonstrate, here are the instructions for running the Spanners demo app. There’s no need to download the source code or install a webserver or database. You don’t even need Java.

  1. Download the 10 line Spanners docker-compose.yml file
  2. In the download directory, run sudo docker-compose up -d
  3. That’s it!

Docker Compose will start the database and webserver containers as defined in the compose file. Docker itself will download the images of these containers if necessary. Browsing to http://localhost:8080/spanners-mvc/ will show the Spanners app welcome page (you can login as jones / password).

The potential benefits should be obvious. A complete working environment stack can be spun up on any developer workstation or server in a matter of seconds. At the moment, Docker Compose is not recommended for production environments. However, Docker Compose can be used to very cheaply create development and test environments. Indeed, one suggested use case is to create a ‘disposable’ CI environment in which build tests are run. When the build is complete, the environment is simply stopped and discarded.

 

Hashing and Salting passwords with Spring Security PasswordEncoder


A standard Spring Security configuration uses username / password based authentication. This always presents a tricky problem: how to securely store a user’s password in such a way that it can’t be read by anyone with access to our database. It’s naive to assume that our password database is 100% secure – just ask Adobe, Sony, Ashley Madison and every other large organization that has had their database breached. Even if the database isn’t ‘breached’ or ‘leaked’, legitimate database admins or sys admins still have access to user passwords. A database containing user passwords is a liability that we’d rather not have.

The standard solution to this problem is to store a hash of the password rather than the plain text or even encrypted text. I don’t want to focus on why this is good or how it works as many others have done this already. I’ve found no better discussion of this (and password management in general) than Troy Hunt’s post on Everything you ever wanted to know about building a secure password reset feature.

Getting the details right when implementing password storage is critical. Some hash algorithms are vulnerable or just not suited to password hashing. If the salt is too short or predictable, it may be possible to retrieve the password from the hash. Any number of subtle bugs in coding could result in a password database that is vulnerable in one way or another. Fortunately, Spring Security includes password hashing out of the box. What’s more, since version 3.1, Spring Security automatically takes care of salting too.

The following example is available to download from GitHub in version 3.4 of the Spanners app.

Spring Security: database backed user authentication

For what it’s worth, I still prefer the old-skool XML namespace configuration over the newfangled Java configuration for Spring Security. I find it to be more readable. The following examples all use the Spring Security XML namespace but would work equally well using Java configuration.

Spring Security offers database backed user authentication allowing a username and password to be verified against a database table:

<authentication-manager>
	<authentication-provider user-service-ref="userDetailsService"/>
</authentication-manager>

<beans:bean id="userDetailsService" class="org.springframework.security.core.userdetails.jdbc.JdbcDaoImpl">
	<beans:property name="dataSource" ref="spannersDS"/>
</beans:bean>

This will verify a username / password against the users table of the database referred to via the dataSource property. The database is set up with the default schema as detailed in the JdbcDaoImpl documentation.

Configuring a Password Encoder

The BCrypt password encoder can be configured simply by having the authentication-provider refer to it:
<authentication-manager>
	<authentication-provider user-service-ref="userDetailsService">
		<password-encoder hash="bcrypt"/>
	</authentication-provider>
</authentication-manager>

In previous versions of Spring, it was necessary to specify a salt source too. As of version 3.1, all implementations of the new PasswordEncoder interface take care of salting automatically.
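
For reference, the new PasswordEncoder interface is deliberately small. Implementations generate a random salt on encode and embed it in the returned hash, so matches needs no separate salt source:

public interface PasswordEncoder {

    // Hash the raw password; a random salt is generated and embedded in the result
    String encode(CharSequence rawPassword);

    // Verify a raw password against a stored hash; the salt is read back from the hash
    boolean matches(CharSequence rawPassword, String encodedPassword);
}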

The above configuration uses the BCrypt password encoder with default settings. If you want to use a non-standard PasswordEncoder implementation, or if you need to refer to a PasswordEncoder bean, this is also possible:
<authentication-manager>
	<authentication-provider user-service-ref="userDetailsService">
		<password-encoder ref="passwordEncoder"/>
	</authentication-provider>
</authentication-manager>

<beans:bean id="passwordEncoder" class="org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder"/>

User admin

The above configuration of the authentication manager, authentication provider and password encoder deal only with verifying a password on login. We also need some means of creating user accounts with hashed passwords. An obvious way to do this is with a self-service registration page that asks new users to choose a username and password:

Sign up page

The implementation behind this will create a new row in the users table with the given username and password. The passwordEncoder bean can be reused to create the hash of the password entered by the user. An instance of Spring’s JdbcUserDetailsManager can be used to create the account. JdbcUserDetailsManager is a subclass of JdbcDaoImpl (and obviously implements the UserDetailsService interface) so the same bean can be used by the authentication manager to look up user details on authentication.
<authentication-manager>
	<authentication-provider user-service-ref="userDetailsManager">
		<password-encoder ref="passwordEncoder"/>
	</authentication-provider>
</authentication-manager>

<!-- UserDetailsManager acts as a UserDetailsService for the authentication provider
	 and also allows CRUD operations on users and groups. Backed by JDBC - the spanners database. -->
<beans:bean id="userDetailsManager" class="org.springframework.security.provisioning.JdbcUserDetailsManager">
	<beans:property name="dataSource" ref="spannersDS"/>
</beans:bean>

<beans:bean id="passwordEncoder" class="org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder"/>

The userDetailsManager and passwordEncoder beans used for authentication are injected (autowired) into the Spring MVC controller for the signup page, allowing account creation:
@Controller
public class SignupController {

    @Autowired private UserDetailsManager userDetailsManager;
    @Autowired private PasswordEncoder passwordEncoder;
    
    @RequestMapping(value = "/signup", method = RequestMethod.POST)
    public String signup(@Valid @ModelAttribute SignupForm signupForm, Errors errors) {
        if (errors.hasErrors()) {
            return null;
        }

        // Password should be stored hashed, not in plaintext
        String hashedPassword = passwordEncoder.encode(signupForm.getPassword());

        // Roles for new user
        Collection<? extends GrantedAuthority> roles = Arrays.asList(
                new SimpleGrantedAuthority("ROLE_VIEWER"),
                new SimpleGrantedAuthority("ROLE_EDITOR")
        );

        // Create the account
        UserDetails userDetails = new User(signupForm.getName(), hashedPassword, roles);
        userDetailsManager.createUser(userDetails);

        return "redirect:/";
    }
}

Which PasswordEncoder implementation / hashing algorithm to use?

Spring Security offers two implementations of the new PasswordEncoder interface – BCryptPasswordEncoder and the confusingly named StandardPasswordEncoder based on SHA-256. The BCrypt implementation is the recommended one. There’s also a NoOpPasswordEncoder which does no encoding. It’s intended for unit testing only.

Spring Security also has implementations based on MD4, MD5 and SHA-1 which implement a previous PasswordEncoder interface. This version of the interface is now marked as deprecated as the new one deals better with salting. Confusingly, the implementations are not specifically marked as deprecated. They should however be avoided as they are based on hashing algorithms now known to be insecure. Their only legitimate use is backwards compatibility with legacy applications.

BCrypt is widely accepted as a suitably secure password hashing function. Both OWASP and an excellent explainer on CrackStation recommend BCrypt. PBKDF2 and scrypt are also frequently mentioned as suitable candidates for password encoding but no standard implementation exists for either of these in Spring Security. While it is possible to create a custom implementation of PasswordEncoder using any algorithm, I’d usually recommend sticking to a provided implementation if at all possible.
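
To make this concrete, here’s a minimal sketch of hashing and verifying a password with the BCryptPasswordEncoder (the example strings are illustrative only):

PasswordEncoder encoder = new BCryptPasswordEncoder();

// Each call to encode() produces a different hash because a fresh random salt is generated
String hash = encoder.encode("correct horse battery staple");

// matches() extracts the salt from the stored hash before comparing
boolean ok = encoder.matches("correct horse battery staple", hash); // true
boolean bad = encoder.matches("wrong password", hash);              // false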

WebSocket push notifications with Node.js


The Node.js website describes it as having “an event-driven, non-blocking I/O model that makes it lightweight and efficient”. Sounds lovely, but what’s it actually for?

Modulus’s excellent blog post – An Absolute Beginner’s Guide to Node.js – provides some rather tasty examples. After covering the trivial examples (Hello world! and simple file I/O), it gets to the meat of what we’re about – an HTTP server. The simple example demonstrates a trivial HTTP server in Node.js in 5 lines of code. Not 5 lines of code compiled to an executable or deployed into an existing web server. 5 lines of code that can be run from a simple command. It then goes on to describe the frameworks and libraries that let you do really useful stuff.

This looks just the thing for implementing a new feature in the Spanners demo app: push notifications to all logged-in users when a spanner is changed.

The user story

The Spanners demo app is an application that manages an inventory of spanners. Every user can view a list of all spanners in the inventory (including the spanner’s name, size and owner) and can add new spanners or update or delete a spanner that they own. Mr Smith logs into the Spanners demo app and views a list of all spanners. Mr Jones also logs in and updates one of his spanners. Immediately after Mr Jones saves his change, Mr Smith is informed with a notification message on screen.

User notification

The technologies

Standard HTTP is a ‘pull’ technology. When a user clicks a link to a page in their browser, they make a request to the server and pull the response. In this example though, we’re looking for a ‘push’ notification. A number of technologies exist to allow a server to push notifications to the client browser without having the user request a page. I’ve chosen WebSocket. A WebSocket connection can be opened by the client browser when a page is opened. In this case the WebSocket connection is to a Node.js server. Node.js maintains a list of all active WebSocket connections. When our Node.js server is informed of an updated spanner, it broadcasts the message to all open WebSocket connections. This allows every active browser session to be informed of updates.

The solution – browser side

Let’s start with the browser side. Making a WebSocket connection and listening for updates requires only some very straightforward Javascript on the page:

<script type="text/javascript">
    var myWebSocket = new WebSocket("ws://localhost:9090");
    myWebSocket.onmessage = function(evt) {
        Msg.info("A spanner has been updated. Please refresh the page to see changes.");
    };
</script>

In this trivial example, we just display a message every time any message is sent through the socket. We could easily extend to read the (JSON) message received and display a more detailed message. This example assumes that our WebSocket server is running on localhost port 9090 (ws is the protocol).

The solution – Node.js

We require a server that listens for REST notifications from the Spanners server side application and then broadcasts the notification to all open WebSocket connections. In Node.js, this is remarkably simple:

var WebSocketServer = require('ws').Server
  , wss = new WebSocketServer({ port: 9090 });
var express = require('express');
var app = express();

// Start REST server on port 8081
var server = app.listen(8081, function () {
  var host = server.address().address
  var port = server.address().port
  console.log("Websocket event broadcaster REST API listening on http://%s:%s", host, port)
});

// Broadcast updates to all WebSocketServer clients
app.post('/notify/spanner/:spanner_id/update', function (req, res) {
   var spanner_id = req.params.spanner_id;
   console.log('Event: spanner %s updated', spanner_id);
   wss.clients.forEach(function each(client) {
      client.send("broadcast: spanner " + spanner_id + " updated");
    });
   res.sendStatus(200);
});

Everything before the app.post handler is initialization. First, we define the two npm libraries we’ll use: ws for WebSockets and express for the REST service. Then we define the WebSocket server port as 9090 and the REST server port as 8081. We log a startup message for good measure.

The app.post handler is where the action is. When our REST service receives a POST request to the endpoint /notify/spanner/<spanner_id>/update, it will send a message through the WebSocket connections for every currently connected client. The list of clients is maintained for us by the WebSocketServer library. We could send a full JSON message, but in this simple case, just a plain text string message is sent.

That’s it. That’s really it. The 21 lines here are a fully functioning HTTP (REST) and WebSocket server. Assuming that this is saved in a file called server.js, this can be run as

node server.js

The solution – Java application

All that’s left to do now is to have our Java application call the REST service to trigger the broadcasts. I quite like the REST support in Spring-web, so my solution looks like this:

<bean id="notifyUserEventListener" class="org.dontpanic.spanners.events.NotifyUserEventListener"
	p:notificationServiceUrl="http://localhost:8081/notify/spanner/42/update"
	p:restTemplate-ref="restTemplate"/>

<bean id="restTemplate" class="org.springframework.web.client.RestTemplate"/>

public class NotifyUserEventListener implements ApplicationListener<SpannerEvent> {

    // Injected via the p: properties in the bean definition above
    private String notificationServiceUrl;
    private RestTemplate restTemplate;

    public void setNotificationServiceUrl(String url) { this.notificationServiceUrl = url; }
    public void setRestTemplate(RestTemplate restTemplate) { this.restTemplate = restTemplate; }

    @Override
    public void onApplicationEvent(SpannerEvent spannerEvent) {
        restTemplate.postForObject(notificationServiceUrl, Integer.toString(spannerEvent.getSpannerId()), String.class);
    }
}
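
The SpannerEvent class isn’t shown here. A minimal sketch, assuming it’s a plain Spring ApplicationEvent carrying the id of the updated spanner, might look like this:

public class SpannerEvent extends ApplicationEvent {

    private final int spannerId;

    public SpannerEvent(Object source, int spannerId) {
        super(source); // source is the component that published the event
        this.spannerId = spannerId;
    }

    public int getSpannerId() {
        return spannerId;
    }
}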

Putting it all together

The Spanners demo app now consists of

  • A WAR webapp, running in Tomcat
  • A MySQL database
  • An (optional) Node.js server

Running all this directly on localhost is certainly possible but the steps to install and configure pre-requisites are now quite complicated. I prefer to run this using Docker. The latest Docker Compose file creates the necessary containers and runs the current version of the application.

If you want to run this application, install docker-compose, download the current docker-compose.yml file and run

docker-compose up -d

then browse to http://localhost:8080/spanners-mvc/. If you want to view the complete source of this example, it’s on GitHub.

No code REST services with Spring Boot and Spring Data REST


CRUD REST services are the backbone of a microservice architecture. If we want to use microservices rather than monolithic applications, it’s essential that we can create a basic service with a minimum of effort. Spring Boot can be used to quickly create and deploy a new web service. Spring Data REST can be used to build out the REST interface based on a database entity model. Using both together allows us to create a running RESTful web service with zero custom Java code and no tricky XML.

This article describes how to build a RESTful web service as an executable JAR that provides CRUD operations against a single MySQL database table.

This demo can be downloaded from GitHub in the Spanners Demo Application version 4.0 (spanners-api module). You can run the working example as a docker-compose stack, along with the associated MySQL database and the Spring MVC web app that consumes the service (see the previous post on docker-compose for details on how to run this).

Spring Boot starters

The Spring Boot starter projects allow us to easily pull in the dependencies we need based on our broad requirements. It’s no longer necessary to explicitly define every library that we build on. Just pull in the Spring Boot starters for our application type and we’re good.

For a Spring Data REST project, we need the JPA starter (to configure our database mappings) and the Spring Data REST starter. We’ll most likely also want the test starter to allow us to write unit tests. The one dependency we need to explicitly define is our specific database driver (MySQL in my case).

If you dislike XML, this can be done with Gradle. I still prefer Maven so my pom.xml looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.3.5.RELEASE</version>
    </parent>

    <modelVersion>4.0.0</modelVersion>
    <groupId>spanners</groupId>
    <artifactId>spanners-api</artifactId>
    <version>4.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>Spanners REST API</name>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-rest</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- MySQL for production database connection -->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

No need to specify version numbers for any of these. The Spring Boot parent manages all these. The Spring Boot Maven Plugin allows Maven to run Spring Boot applications directly, or to build an all in one executable jar so we can run our application from command line – no need to deploy to an application server.

Entity model and mapping

I have a database table that I want to model in my application:

CREATE TABLE `spanner` (
  `id` int(11) NOT NULL auto_increment,
  `name` varchar(255) default NULL,
  `size` int(11) default NULL,
  `owner` varchar(255) default NULL
)

This is mapped to a Java class using JPA annotations:

@Entity
public class Spanner {

    @Id
    @GeneratedValue(strategy= GenerationType.AUTO)
    private Long id;
    private String name;
    private int size;
    private String owner;
    
    // Getters and setters go here

}

Note that only the class and the id field are annotated. JPA is smart enough to correctly map the name, size and owner fields to the obvious database columns.
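
If a field name didn’t match its column – say the table used a legacy owner_name column (a hypothetical example) – the mapping could be made explicit with the @Column annotation:

@Column(name = "owner_name") // hypothetical column name that doesn't match the field
private String owner;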

Data Repository

This is where it starts to get clever. A while back, if we needed to create a data repository (or Data Access Object / DAO), we’d define all the operations we want (create, read, update, delete) and then implement each operation as a method using Hibernate, SQL or whatever.

Now, the JpaRepository interface (with super interfaces PagingAndSortingRepository and CrudRepository) defines methods for our standard CRUD operations. It’s a typed interface so we can define something like this:

public interface SpannerRepository extends JpaRepository<Spanner, Long> {
}

Where Spanner is the type of object managed by the repository and Long is the type of unique identifier – essentially the database table primary key.

The JpaRepository interface defines 18 methods. This could result in a great deal of boilerplate code if we were to implement by hand. Fortunately, JPA will create the implementation of this interface automatically. Mechanisms exist to trigger this implementation using XML or Java based Spring config. This configuration is enabled by default with the spring-boot-starter-data-jpa module so we need do nothing further at all.
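
Spring Data can also derive query implementations from method names. As a sketch, a hypothetical finder for all spanners belonging to a given owner needs only a declaration – no implementation:

public interface SpannerRepository extends JpaRepository<Spanner, Long> {

    // Spring Data derives the query from the method name: where owner = ?
    List<Spanner> findByOwner(String owner);
}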

Expose a REST interface

We now want a RESTful API for our repository. Again, we could hand craft this, but Spring Data REST takes care of this for us with a single annotation on our repository definition:

@RepositoryRestResource
public interface SpannerRepository extends JpaRepository<Spanner, Long> {
}

This will expose a well-defined RESTful API with repository CRUD operations correctly mapped to HTTP methods (POST for create, GET for read etc) with API meta-data exposed using HAL.

Configuration

The only thing needing explicit configuration is the connection to a data source. This can be done in an application.properties file which can either be included inside the application (in src/main/resources) or provided at runtime. A connection to a MySQL database looks like this:

spring.datasource.url=jdbc:mysql://localhost:3306/spanners
spring.datasource.username=spanners
spring.datasource.password=password
spring.datasource.driver-class-name=com.mysql.jdbc.Driver

And now, the boilerplate…

There had to be some boilerplate somewhere. In this case, it’s the main class that bootstraps into Spring Boot. You always need one and it usually looks the same. It looks like this:

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class);
    }
}

That’s it.

Running the app

This could be built as a WAR and deployed to an application server. Spring Boot offers us an alternative though. It can build as an executable JAR containing all dependencies and an embedded web server. So we build the app (using Maven) in the usual way:

mvn clean package

and then run like this:

java -jar ./target/spanners-api-4.0.jar

We can verify that the app is running by entering http://localhost:8080/spanners/ in a browser. It should return a valid JSON response:

{
  "_embedded" : {
    "spanners" : [ {
      "id" : 1,
      "name" : "Gertrude",
      "size" : 10,
      "owner" : "smith",
      "_links" : {
        "self" : {
          "href" : "http://localhost:8090/spanners/1"
        },
        "spanner" : {
          "href" : "http://localhost:8090/spanners/1"
        }
      }
    }, {
      "id" : 2,
      "name" : "Samantha",
      "size" : 16,
      "owner" : "smith",
      "_links" : {
        "self" : {
          "href" : "http://localhost:8090/spanners/2"
        },
        "spanner" : {
          "href" : "http://localhost:8090/spanners/2"
        }
      }
    }, {
      "id" : 3,
      "name" : "Susan",
      "size" : 20,
      "owner" : "jones",
      "_links" : {
        "self" : {
          "href" : "http://localhost:8090/spanners/3"
        },
        "spanner" : {
          "href" : "http://localhost:8090/spanners/3"
        }
      }
    } ]
  },
  "_links" : {
    "self" : {
      "href" : "http://localhost:8090/spanners"
    },
    "profile" : {
      "href" : "http://localhost:8090/profile/spanners"
    }
  },
  "page" : {
    "size" : 20,
    "totalElements" : 3,
    "totalPages" : 1,
    "number" : 0
  }
}

It shows the attributes of the three spanners currently in the database. It also shows paging information and a link to the service ALPS meta-data. That’s not at all bad for one annotated interface definition, a couple of annotated classes, a one line method implementation, some config and a build definition.

Building, tagging and pushing Docker images with Maven


A standard use case for Docker is to build a container to run a pre-built application so that the containerized app can be run on any Docker enabled host. The application and the container are sometimes developed and built separately. First the application is built, then a container is defined and built to include the application. However, it can be better to promote the Docker container to a first-class build artifact. That is, the build process always builds the deployed component and its container at the same time. This saves a manual build step and also ensures that the Docker container is always up to date with the latest application build. It allows us to easily develop and test against the Dockerized application directly – every build results in a new deployable container.

There are a number of ways to do this. This article looks at hooking the Docker tasks into the Maven build process.

The Docker Maven Plugin

The docker-maven-plugin from Spotify allows us to perform Docker operations from within Maven including build, tag and push images. The Docker operations can be run from command line through Maven but the real power comes from binding them to Maven build phases. This allows us to build new images on every build.

Build a Dockerfile

Building a Dockerfile from Maven is very straightforward. The plugin needs just a couple of configuration settings:

<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>0.4.10</version>
    <configuration>
        <imageName>hotblac/${project.artifactId}</imageName>
        <dockerDirectory>${project.basedir}/docker</dockerDirectory>
    </configuration>
</plugin>

The <imageName> tag specifies the name of the image to be built, prefixed with your Docker Hub account name (hotblac in my case). ${project.artifactId} makes the image name the same as the Maven project name. The <dockerDirectory> location contains the Dockerfile and any resources required to build the image. If any build time resources need to be added (such as the built artifact), they can be specified in the <resources> section.

Running

mvn clean package docker:build

will build the Maven project and the specified Docker image.

Creating images without a Dockerfile

It’s also possible to build a Docker image entirely based on Maven configuration properties. This is pretty neat if all you want is a simple Java 8 Alpine container to run an executable jar.

<plugin>
	<groupId>com.spotify</groupId>
	<artifactId>docker-maven-plugin</artifactId>
	<version>0.4.10</version>
	<configuration>
		<imageName>hotblac/${project.artifactId}</imageName>
		<baseImage>java:openjdk-8-jdk-alpine</baseImage>
		<entryPoint>["java","-jar","/${project.build.finalName}.jar"]</entryPoint>
		<resources>
			<!-- copy the service's jar file from target into the root directory of the image -->
			<resource>
				<targetPath>/</targetPath>
				<directory>${project.build.directory}</directory>
				<include>${project.build.finalName}.jar</include>
			</resource>
		</resources>
	</configuration>
</plugin>

This example creates a Docker image containing the project’s jar. Just the <baseImage> and <entryPoint> elements need to be specified. This is a basic configuration to build an executable jar and a simple container to run it. Again, the Docker image can be built with:
mvn clean package docker:build

Binding to build phases

The real power of the plugin is here. By binding plugin goals to build phases, we can have Maven automatically build a Docker container on every build.

<execution>
	<id>build-image</id>
	<phase>package</phase>
	<goals>
		<goal>build</goal>
	</goals>
</execution>

This simply triggers the plugin’s build goal whenever the Maven package phase is invoked.

We can also have Maven tag and push images to a Docker repo (Docker Hub for example) whenever the project is deployed:

<execution>
	<id>tag-image-version</id>
	<phase>deploy</phase>
	<goals>
		<goal>tag</goal>
	</goals>
	<configuration>
		<image>hotblac/${project.artifactId}</image>
		<newName>docker.io/hotblac/${project.artifactId}:${project.version}</newName>
		<serverId>docker-hub</serverId>
		<pushImage>true</pushImage>
	</configuration>
</execution>
<execution>
	<id>tag-image-latest</id>
	<phase>deploy</phase>
	<goals>
		<goal>tag</goal>
	</goals>
	<configuration>
		<image>hotblac/${project.artifactId}</image>
		<newName>docker.io/hotblac/${project.artifactId}:latest</newName>
		<serverId>docker-hub</serverId>
		<pushImage>true</pushImage>
	</configuration>
</execution>

The syntax here is a little odd so let me explain.

  • <image> is the name of the image to be tagged / pushed.
  • <newName> is an alias for the image name with the name of the repository (docker.io in this case) prepended. This tells the plugin where to push the image.
  • <serverId> is a lookup for a server configuration in your Maven settings.xml file. This allows the Docker repo username / password to be external to the Maven POM file which is likely to be public.
  • <pushImage> is a switch for the plugin’s tag goal which pushes the image after tagging it. This allows us to tag and push in a single execution step. There is also a push plugin goal, but it’s not capable of tagging the image.
  • Finally, the whole execution configuration is copied twice. This is a workaround for Docker’s special behaviour for the ‘latest’ tag. We actually want to tag and push two images: the first tagged with the project version (${project.version}) and the second with the latest tag. I can’t figure out a nicer way to do this.

Now, we can simply run

mvn clean deploy

and Maven will:

  • Build your project’s main artifact (jar, war or whatever)
  • Build the Docker image from your Dockerfile or from the plugin’s configuration
  • Tag the image with the Maven project version number and latest
  • Push the image to the Docker repo

The result is that every time the project is deployed (or released), the Maven artifacts will be published to the Maven repo and the Docker images will be published to the Docker repo. The Docker repo will then contain runnable containers for every historic application build, with no additional steps in the build or release processes. Lovely.

Try it yourself!

This example worked particularly nicely with my Spring Boot application. Because Spring Boot packages web applications as self-contained executable jars rather than wars / ears to be deployed to an application server (such as Tomcat), deployment to Docker is very simple. We can simply copy the jar into a basic Java container and start it.

The source code for this example is available in the Spanners project, version 4.1 at GitHub. And of course, the resulting Docker containers are also available at Docker Hub.

 

RestTemplateBuilder and @RestClientTest in Spring Boot 1.4.0


The first release candidate for Spring Boot 1.4.0 is now available. Among the enhancements are new mechanisms to build and test RestTemplates used to make calls to RESTful web services.

RestTemplateBuilder

The new RestTemplateBuilder class allows

RestTemplate
s to be configured by the REST client class. A
RestTemplateBuilder
  instance is auto-configured by Spring Boot with sensible defaults. Any custom values can be overridden as necessary.

As an example, the Spanners Demo application needs to make REST calls to a HAL enabled RESTful service and so needs the Jackson2HalModule set on the Jackson HttpMessageConverter:

Before:

@Service
public class SpannersService {

    private String rootUri;
    private RestTemplate restTemplate;

    public SpannersService(@Value("${app.service.url.spanners}") String rootUri) {
        
        this.rootUri = rootUri;
        
        ObjectMapper mapper = new ObjectMapper();
        mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
        mapper.registerModule(new Jackson2HalModule());

        MappingJackson2HttpMessageConverter converter = new MappingJackson2HttpMessageConverter();
        converter.setSupportedMediaTypes(MediaType.parseMediaTypes("application/hal+json"));
        converter.setObjectMapper(mapper);
        restTemplate = new RestTemplate(Arrays.asList(converter));
    }

After:

@Service
public class SpannersService {

    private RestTemplate restTemplate;

    public SpannersService(RestTemplateBuilder builder, @Value("${app.service.url.spanners}") String rootUri) {
        ObjectMapper mapper = new ObjectMapper();
        mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
        mapper.registerModule(new Jackson2HalModule());
        
        restTemplate = builder.messageConverters(new MappingJackson2HttpMessageConverter(mapper))
                              .rootUri(rootUri).build();
    }

As the RestTemplateBuilder is already set with sensible defaults by Spring Boot, we’ll often need to do nothing more than this to configure a working RestTemplate:
@Service
public class MyRestClientService {

    private RestTemplate restTemplate;

    public MyRestClientService(RestTemplateBuilder builder) {
        restTemplate = builder.build();
    }

As a bonus, the RestTemplateBuilder allows us to set a rootUri for all calls made using the template. This simplifies the client code from this
public Spanner findOne(Long id) {
    return restTemplate.getForObject(rootUri + "/{0}", Spanner.class, id);
}

to this

public Spanner findOne(Long id) {
    return restTemplate.getForObject("/{0}", Spanner.class, id);
}

Any request string that starts with ‘/’ has the configured root URL prepended.
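
As a quick sketch (the host name here is illustrative), the root URI expansion works like this:

RestTemplate restTemplate = new RestTemplateBuilder()
        .rootUri("http://example.com/spanners") // hypothetical service root
        .build();

// Issues a GET to http://example.com/spanners/1
Spanner spanner = restTemplate.getForObject("/{0}", Spanner.class, 1L);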

@RestClientTest

The new @RestClientTest annotation simplifies REST client testing considerably. The MockRestServiceServer allows client side code to be tested against a mock server but setting up the mock has always been a little cumbersome:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {ServiceConfig.class})
public class SpannersServiceTest {

    private static final String SERVICE_URL = "http://example.com/spanners";
 
    @Autowired
    private RestTemplate restTemplate;
    private MockRestServiceServer server;
    private SpannersService service;

    @Before
    public void configureService() {
        server = MockRestServiceServer.createServer(restTemplate);
        service = new SpannersService(SERVICE_URL);
        ReflectionTestUtils.setField(service, "restTemplate", restTemplate);

    }

    @Test
    public void testFindOne() throws Exception {

        server.expect(requestTo(SERVICE_URL + "/1")).andExpect(method(GET))
                .andRespond(withHalJsonResponse("/spanner1GET.txt"));

        Spanner spanner = service.findOne(1l);
        assertSpanner("Belinda", 10, "jones", spanner);
    }

In this pre-1.4.0 example, we test the GET specific item REST service, served by the findOne() method of a JpaRepository (see the previous post on No code REST services for details). Note that we have to describe the Spring context with the @ContextConfiguration annotation so that we can inject the RestTemplate configured in ServiceConfig.class. We then provide the configured RestTemplate to the MockRestServiceServer and to our instance of the service under test (SpannersService).

With the new @RestClientTest annotation in Spring Boot 1.4.0 however, this is simplified greatly:
@RunWith(SpringRunner.class)
@RestClientTest(SpannersService.class)
public class SpannersServiceTest {

    @Autowired
    private MockRestServiceServer server;
    @Autowired
    private SpannersService service;

    @Test
    public void testFindOne() throws Exception {

        server.expect(requestTo("/1")).andExpect(method(GET))
                .andRespond(withHalJsonResponse("/spanner1GET.txt"));

        Spanner spanner = service.findOne(1L);
        assertSpanner("Belinda", 10, "jones", spanner);
    }

The SpringRunner and @RestClientTest set up the necessary Spring context and take care of dependencies for the SpannersService. We only need to @Autowire the MockRestServiceServer and the service under test (SpannersService) as Spring builds them for us. Note also that the test does not mention the REST service location. As the service root URI was provided to the RestTemplateBuilder in SpannersService, the test needs only verify the ‘query’ part of the URL. That is, requestTo("/1") verifies that a request is made to http://<whateverServiceHasBeenConfigured>/1.

The @RestClientTest annotation builds on the configurability provided by the RestTemplateBuilder. The @RestClientTest adds a MockServerRestTemplateCustomizer to the RestTemplateBuilder injected into the class under test and this wires it into the MockRestServiceServer. For this reason, the RestTemplateBuilder must be used in any class tested with @RestClientTest. The @RestClientTest annotation will not work with classes that instantiate their own RestTemplate in the old way.

Spring Boot 1.4.0 release

Both RestTemplateBuilder and @RestClientTest are available for preview in the recently released 1.4.0.RC1 release candidate available from the spring-milestones repo. According to the RC1 blog post, the full release is scheduled for the end of July.

The full code for these examples is in the Spanners project at GitHub, version 4.2. The RestTemplateBuilder can be seen in the SpannersService class: before and after. The @RestClientTest annotation can be seen in the corresponding SpannersServiceTest: before and after. Note also that the old SpannersService used a RestTemplate configured in ServiceConfig. This functionality was moved into the SpannersService class after Spring Boot 1.4.

 


Microservice discovery with Spring Boot and Eureka


One of the standard problems with Microservices Architecture is the issue of service discovery. Once we’ve decomposed our application into more than a handful of distinct microservices, it becomes difficult for every service to know the address of every other service it depends on. Configuring dependencies from inside a microservice is impractical – it distributes configuration among all the microservices. It also violates the DRY principle – multiple microservice instances will need access to the same configuration settings. What’s more, it goes against the Dependency Injection design that’s supposed to be one of the benefits of the Microservices Architecture.

The standard solution is to delegate location of microservices to a new microservice. In keeping with the Single Responsibility Principle, this ‘discovery’ microservice is responsible for tracking the locations of all the other microservices and nothing else.

Netflix’s Eureka is an implementation of a discovery server and integration is provided by Spring Boot. Using Spring Boot, we can build a Eureka discovery server and have our microservices register with it.

The code for the following example can be downloaded from the Spanners demo app, version 4.4 on GitHub. The full stack exists as Docker images at DockerHub and can be started with this docker-compose file.

Building a Eureka server

Spring Boot’s opinionated design makes it easy to create a Eureka server just by annotating the entry point class with @EnableEurekaServer:
@SpringBootApplication
@EnableEurekaServer
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

}

So long as the spring-cloud-starter-eureka-server dependency is present in the Maven / Gradle build config, this will start the application as a Eureka server. The starter dependency is part of the Spring Cloud project so you’ll want to use the latest Spring Cloud release train (currently Brixton.SR5) to manage your Maven dependency versions:

<dependencies>
	<dependency>
		<groupId>org.springframework.cloud</groupId>
		<artifactId>spring-cloud-starter-eureka-server</artifactId>
	</dependency>
</dependencies>

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Brixton.SR5</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Spring’s Getting Started with Service Registration and Discovery describes how the same can be managed with Gradle.

Some basic configuration is required in the application.properties (or application.yml) file:

# Not a client, don't register with yourself
eureka.client.registerWithEureka: false
eureka.client.fetchRegistry: false

server.port=8761

When this Eureka server starts, it will listen for registrations on port 8761. When our microservices start, they’ll make a call to Eureka to register themselves. Then they can query Eureka to find other registered servers. Eureka also provides a simple status console which can be viewed on http://localhost:8761.

Eureka console

This console shows that Eureka is running and that no instances are currently registered.

Registering services with Eureka

Now that we have a centralized discovery server, we need every service to register with it. This is about as easy as creating the server. First, we need to add the spring-cloud-starter-eureka dependency to Maven / Gradle. Again, use the spring-cloud release train to manage versions:
<dependencies>
	<dependency>
		<groupId>org.springframework.cloud</groupId>
		<artifactId>spring-cloud-starter-eureka</artifactId>
	</dependency>
</dependencies>

<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-dependencies</artifactId>
			<version>Brixton.SR5</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>

Then, enable discovery with the @EnableDiscoveryClient annotation on a @Configuration class or the @SpringBootApplication entry point class:
@Configuration
@EnableDiscoveryClient
public class RestConfig {

	// Application beans configured here
}

And finally, add a couple of configuration settings to the application.properties (or application.yml) config file. This tells the application where Eureka is and how this service should be named in Eureka:

eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka/

spring.application.name=spanners-api

Now, when we start the spanners-api service, we can see it registered in Eureka:

spanners-api registered with Eureka

Querying Eureka

In my application, our front end component (spanners-mvc) depends on two back end microservices (spanners-api and spanners-users). Instead of hard coding the locations of the two microservices in our front end component, we want it to ask Eureka. To do this we first follow the steps above to register the spanners-mvc component with Eureka. Just to check this has worked, we can look at the Eureka console to confirm that spanners-mvc and the two microservices are all registered:

spanners-mvc and two back end services registered with Eureka

Now, we can refer to the back end microservices by using their names rather than their server addresses. The current examples from Spring suggest something like this:

@Service
public class WebAccountsService {

    @Autowired
    @LoadBalanced
    protected RestTemplate restTemplate; 

    protected String serviceUrl = "http://ACCOUNTS-SERVICE"; // ACCOUNTS-SERVICE is the name of the microservice we're calling

    public Account getByNumber(String accountNumber) {
        Account account = restTemplate.getForObject(serviceUrl
                + "/accounts/{number}", Account.class, accountNumber);

        if (account == null)
            throw new AccountNotFoundException(accountNumber);
        else
            return account;
    }
    ...
}

The @LoadBalanced annotated RestTemplate will resolve application names (ACCOUNTS-SERVICE) to a real server name / port by querying Eureka. The @LoadBalanced annotation tells Spring Boot to customize the RestTemplate with a ClientHttpRequestFactory that does a Eureka lookup before making the HTTP call. To make this work, you’ll need to add a new config setting to application.properties:
ribbon.http.client.enabled=true

This just switches on the Ribbon load balancing behind the @LoadBalanced annotation.

If you’re interested, RibbonAutoConfiguration does the customization of the RestTemplate and RibbonClientHttpRequestFactory does the Eureka lookup. This is all done for us, just by adding the @LoadBalanced annotation to a RestTemplate bean.

Eureka with Spring Boot 1.4 RestTemplateBuilder

As of Spring Boot 1.4, it’s no longer recommended to directly @Autowire an instance of RestTemplate into a Rest client class. Instead, we can use the RestTemplateBuilder to give us some more flexibility in configuring the RestTemplate. It also allows us to use the new @RestClientTest annotation to test Rest clients. More details on the advantages of the new RestTemplateBuilder are in my previous post on the subject.

If we don’t @Autowire a RestTemplate bean though, we can’t use the @LoadBalanced annotation to customize the RestTemplate for Eureka lookups. At the time of writing, Spring has no built in solution for this as the current Spring Cloud release train (Brixton) was built for Spring Boot 1.3 – without RestTemplateBuilder support. If you want to use the RestTemplateBuilder with Eureka, you’ll need to customize the RestTemplate yourself. This is very straightforward:
@Configuration
@ConditionalOnClass(HttpRequest.class)
@ConditionalOnProperty(value = "ribbon.http.client.enabled", matchIfMissing = false)
public class RestClientConfig {

    /**
     * Customize the RestTemplate to use Ribbon load balancer to resolve service endpoints
     */
    @Bean
    public RestTemplateCustomizer ribbonClientRestTemplateCustomizer(
            final RibbonClientHttpRequestFactory ribbonClientHttpRequestFactory) {
        return new RestTemplateCustomizer() {
            @Override
            public void customize(RestTemplate restTemplate) {
                restTemplate.setRequestFactory(ribbonClientHttpRequestFactory);
            }
        };
    }

}

This is pretty much a copy of the RibbonAutoConfiguration.RibbonClientConfig bean used to power the @LoadBalanced annotation except that it configures an instance of RestTemplateCustomizer. The RestTemplateBuilder automatically pulls in all configured RestTemplateCustomizers when it initializes so we don’t need to manually inject this into anything.

The conditional annotations on this configuration class create the RestTemplateCustomizer only if Ribbon (and Eureka) is enabled.

With this bean present, we can use the RestTemplateBuilder in our Rest client code and it will resolve our application names, just as if we had used the @LoadBalanced annotation.
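
As a closing sketch (the service name and endpoint here are illustrative, not taken from the Spanners code), a Rest client built this way can call a registered service by its Eureka application name:

@Service
public class SpannersClient {

    private final RestTemplate restTemplate;

    public SpannersClient(RestTemplateBuilder builder) {
        // The RestTemplateCustomizer defined above is applied automatically by the builder,
        // so the hypothetical SPANNERS-API application name is resolved via Eureka / Ribbon
        this.restTemplate = builder.rootUri("http://SPANNERS-API").build();
    }

    public Spanner findOne(Long id) {
        return restTemplate.getForObject("/spanners/{0}", Spanner.class, id);
    }
}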