Dancing the Lambada with AWS Lambda or sending emails on S3 events

Introduction

The aim of this post is to offer a quick start to AWS Lambda, using a small application for illustration. It is meant for someone who has some experience with Node.js (although I had very little when I started) and JavaScript, so please forgive any non-JS idioms.

AWS Lambda is presented by Amazon as a service for running short-lived Node.js processes that are triggered in response to events in your Amazon infrastructure. Of course, you can do this already using home-grown methods, but by using Lambda you do not have to keep an EC2 instance running or support tasks such as scaling Node.js and backups.

As with any service there are limitations that you should be aware of. The most important are:
1. You have no direct access to the infrastructure
2. Only Node.js (JavaScript) applications are supported. There are hints this may change, since the descriptor for an application has a field for the runtime, e.g. "Runtime": "nodejs"
3. Applications can only run for a maximum of 60 seconds before they are killed.

For those who just want to get to the code, here it is: https://github.com/rmauge/aws-lambda-s3-email

Setup

The project README.md lists all the tasks to install a complete development environment. This post will reference it as necessary, so consider the README.md the source of truth. You need an AWS account (http://aws.amazon.com/) to try this out. I use Ubuntu, so the commands expect that environment, but it should be fairly straightforward to adapt them for other OSes. Complete up to and including step 4 before continuing.

1. Install pip
* Download: curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
* Install: sudo python get-pip.py
2. Use a python virtual environment
* Install: sudo pip install virtualenv
* Create: virtualenv lambda-env
* Activate: source lambda-env/bin/activate
3. Install node using a virtual environment
* Install nodeenv: pip install nodeenv
* Install node: nodeenv -p --node=0.10.32 nenv
* Update npm (installed with node): npm install npm -g
4. Install Amazon AWS [cli tools](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html)
* Install: pip install awscli
* Configure: aws configure
5. Update Lambda Function
* Clone project: git clone git@github.com:rmauge/aws-lambda-s3-email.git
* Zip directory:
```
cd aws-lambda-s3-email/ && zip -r ../aws-lambda-s3-email.zip . -x build/ test/ *~ *.git* .gitignore && cd ..
```
* Submit update:
```
aws lambda upload-function \
  --region us-east-1 \
  --function-name lambdaSubmissionFunction \
  --function-zip aws-lambda-s3-email.zip \
  --role arn:aws:iam::99999999:role/lambda_exec_role \
  --mode event \
  --handler index.handler \
  --runtime nodejs \
  --debug \
  --timeout 10 \
  --memory-size 128
```
6. Test function manually
* Invoke:
```
aws lambda invoke-async --function-name lambdaSubmissionFunction --region us-east-1 --invoke-args aws-lambda-s3-email/test/submissionexample.txt --debug
```
* When testing the key must exist in the S3 bucket
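The file passed to --invoke-args is a hand-written S3 event. A sketch of its shape, and of pulling out the bucket/key pair the function will fetch (field names follow the S3 event notification format; the values are made up, and objectRef is just an illustrative helper):

```javascript
// A hand-rolled S3 event of the shape that Lambda passes to the handler.
var sampleEvent = {
  Records: [{
    eventName: "ObjectCreated:Put",
    s3: {
      bucket: { name: "your-bucket" },
      object: { key: "test/submissionexample.txt" }
    }
  }]
};

// Pull out the bucket/key pair the function will later fetch from S3.
function objectRef(event) {
  var s3 = event.Records[0].s3;
  return { bucket: s3.bucket.name, key: s3.object.key };
}

console.log(objectRef(sampleEvent));
```

Whatever key you put in your test file, it must name an object that really exists in the bucket, as noted above.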

AWS Policies and Roles

The Lambda Function requires certain roles/policies to work. These are:
* An IAM (AWS Identity and Access Management) user to send email (ses-smtp-user)
* A role and policy allowing a bucket event to invoke a Lambda Function (lambda_invoke_role)
* A role and policy allowing the Lambda Function to access the S3 bucket and logging (lambda_exec_role)

In a nutshell, you need to install Node.js (http://nodejs.org/) and the Amazon CLI tools (http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html).

I used a python virtual environment to isolate changes but how you install is up to you.

On the Amazon side you must have the proper users, roles and policies set up so that the various pieces can do their work. But we will get to this later on.

At step 5 you can edit the Lambda Function and settings after cloning the repo to reflect your AWS settings; the config.js file is illustrative.
Since the Lambda Function needs permissions to do its work, we have to set up permissions for the S3 buckets and the logs.

Upload and Configure the Lambda Function

We will create a role in IAM that has the required policy:
L1. Go to the IAM dashboard https://console.aws.amazon.com/iam/home?region=us-east-1#home
L2. Create a new role (the “exec” role) with a new policy attached or inline. This is the role that the function assumes when running. Example policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:*"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::your-bucket/*"
      ]
    }
  ]
}
```

You can do this in IAM by creating a new role with a name of your choosing, “lambda_exec_role” for example. Role Type: “AWS Lambda”. Then attach the policies “AmazonS3FullAccess” and “CloudWatchLogsFullAccess”.

Make note of the Role ARN (Amazon Resource Name) after it is created; you will need it later.

L3. We will also need a role for the S3 bucket to assume to send an event to the function.
Create another role (the “invoke” role) with the following policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Resource": [
        "*"
      ],
      "Action": [
        "lambda:InvokeFunction"
      ]
    }
  ]
}
```

Also, allow only S3 buckets to assume this role. This is the trusted entity profile:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "ArnLike": {
          "sts:ExternalId": "arn:aws:s3:::*"
        }
      }
    }
  ]
}
```

You can do this in IAM by creating a new role with a chosen name, “lambda_invoke_role” for example. Role Type: “AWS S3 Invocation for Lambda Functions”. Policies: AWSLambdaRole.

Now you can zip and upload the function as in Step 5, using the ARN of the exec role recorded in L2:
--role arn:aws:iam::99999999:role/lambda_exec_role

If the upload is successful you will need the ARN of the function. It should be in the results of the aws lambda upload-function command you just ran, or you can find it in the console: https://console.aws.amazon.com/lambda/home?region=us-east-1#/functions.
Make a note of it for later.

Configure S3 Bucket events

The S3 bucket now has to be configured to call that function, with that role, when an event such as a POST or PUT takes place.

S1. Go to https://console.aws.amazon.com/s3/home?region=us-east-1
S2. Click on the bucket and go to properties.
S3. Under Events, add a new notification.
S4. Give it a name and choose the events you need: POST, PUT, etc.
S5. Choose Send to Lambda Function and fill in the function ARN from the return value in Step 5. Fill in the Role ARN with the “lambda_invoke_role” from L3.
Save.

Setting up the Amazon Simple Email Service (SES)

Since the Lambda function sends emails, these permissions also have to be in place.
Make note of your SMTP settings, since these are used by the npm mailer we are using, Nodemailer (https://github.com/andris9/Nodemailer). In particular, note the server region, which may have to be configured. Since I am using us-east-1, no changes had to be made.

Create a new user to send mail.

Go to https://console.aws.amazon.com/ses/home?region=us-east-1 and create a new user, “ses-smtp-user” for example. Make note of the credentials; these are used in config.js to send emails.

"SES_SMTP_USERNAME" : "YOUR_SMTP_USERNAME",
"SES_SMTP_PASSWORD" : "YOUR_SMTP_PASSWORD",

You also may need to verify an email address used for sending the email. Again set in config.js.

"defaultEmailFrom" : "you@example.com",
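To connect the dots, here is roughly how those settings might feed Nodemailer (a sketch only: buildMessage is a hypothetical helper, not part of the project, and the transport options shown follow the current Nodemailer API rather than the 2014-era one, so check the version you have):

```javascript
// Build the mail options object from config values (hypothetical helper).
function buildMessage(config, to, subject, body) {
  return {
    from: config.defaultEmailFrom, // must be a verified SES sender
    to: to,
    subject: subject,
    text: body
  };
}

// Sending requires the nodemailer package (shown commented out here):
//
//   var nodemailer = require("nodemailer");
//   var transport = nodemailer.createTransport({
//     host: "email-smtp.us-east-1.amazonaws.com", // SES SMTP endpoint
//     port: 465,
//     secure: true,
//     auth: { user: config.SES_SMTP_USERNAME, pass: config.SES_SMTP_PASSWORD }
//   });
//   transport.sendMail(buildMessage(config, "to@example.com", "Hi", "Hello"),
//                      function (err, info) { /* handle result */ });
```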

If you are using the example, you will need to zip the package up again after these edits and re-upload it.

Email templates

I chose to store email templates in YAML rather than JSON format since it supports line breaks and is easier on the eyes. The location of the templates is configured in config.js, and there is a sample at email-templates/templates.yml.

Testing

All should be ready now to test the function from the command line. The sample code expects an email meta key field, which is used to determine the address to send to, but your case may be different.
Change the file test/submissionexample.txt to reflect your Amazon settings. Step 6 has the command to test the function.

To check on the progress take a look at the logs at https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#logs:

These should alert you to any problems.

I know it has been a lot to go through, but you should now have a good grasp of working with AWS Lambda. Have fun.


Advanced Behavior-Driven Development (BDD) using Rest-Assured

Behaviour-Driven Development (BDD) is an improvement over Test-Driven Development, but in more of a social manner than in strictly technical terms. Specifically, it enriches the description of how a system should behave under testing by using language that is accessible to all stakeholders. If you want to learn the reasoning that led to BDD, Dan North, its originator, describes what led to his discovery.

The catalyst for my use of BDD was a requirement to test a REST API that was being used by a number of clients, including an Angular web app, an iOS app and an Android app.

The REST API itself was implemented in Drupal (PHP), but being REST (via HTTP) it was agnostic as to the programming language used for testing (as I am). Initially I attempted to use a combination of JUnit and the Apache Commons HttpClient library, but this proved cumbersome, hiding what was being tested behind the supporting code. I then explored several BDD tools before settling on Rest-assured. I like it because it is mature, has an active code-base and a nice DSL (quite a feat since it is written in Java!).

The usage guide will get you up and running, so I will go straight into code since I assume that is why you are here. This particular example deals with the common case of needing a user to be logged in to perform certain tests, logout for instance. In Rest-assured, Filters can be used to realize this need.

package com.raymondmauge.go.api.tests.filters;

import org.apache.commons.lang3.StringUtils;
import org.apache.log4j.Logger;

import com.jayway.restassured.filter.Filter;
import com.jayway.restassured.filter.FilterContext;
import com.jayway.restassured.path.json.JsonPath;
import com.jayway.restassured.response.Cookie;
import com.jayway.restassured.response.Cookies;
import com.jayway.restassured.response.Header;
import com.jayway.restassured.response.Headers;
import com.jayway.restassured.response.Response;
import com.jayway.restassured.specification.FilterableRequestSpecification;
import com.jayway.restassured.specification.FilterableResponseSpecification;

/**
 * Examines a response for user authentication identifiers and if found adds these
 * to subsequent requests using this filter.
 * Adds header, X-CSRF-Token: {token}
 * Set Cookie, Set-Cookie: {sessionName}={sessionId}
 * @author rmauge
 *
 */
public class AuthFilter implements Filter {
	
	private static Logger log = Logger.getLogger(AuthFilter.class);
	
	private String headerNameKey;
	private String sessionNameKey;
	private String sessionIdKey;
	private String tokenNameKey;
	
	private String sessionName;
	private String sessionId;
	private String token;
	
	public AuthFilter(String headerName,
					  String sesNameKey,
					  String sesIdKey,
					  String tokenKey) {
		headerNameKey = headerName;
		sessionNameKey = sesNameKey;
		sessionIdKey = sesIdKey;
		tokenNameKey = tokenKey;
	}

	public Response filter(FilterableRequestSpecification requestSpec,
			FilterableResponseSpecification responseSpec, FilterContext ctx) {
		
		if (StringUtils.isNotBlank(sessionName) &&
				StringUtils.isNotBlank(sessionId) &&
				StringUtils.isNotBlank(token)) {

			Headers headers = requestSpec.getHeaders();
			if (!headers.hasHeaderWithName(headerNameKey) ) {

				requestSpec.header(new Header(headerNameKey, token));
			}

			Cookies cookies = requestSpec.getCookies();
			if (!cookies.hasCookieWithName(sessionName)) {
				requestSpec.cookie(new Cookie.Builder(sessionName, sessionId).build());
			}
			return ctx.next(requestSpec, responseSpec);
		} else {
			final Response response = ctx.next(requestSpec, responseSpec);
			String json = response.asString();
			JsonPath jsonPath = new JsonPath(json);
		
			sessionName = jsonPath.getString(sessionNameKey);
			sessionId = jsonPath.getString(sessionIdKey);
			token = jsonPath.getString(tokenNameKey);
		
			// Log only abbreviated credentials, not the sensitive full values
			log.debug(String.format("Got sessionName: %s, sessionId: %s, token: %s",
								StringUtils.abbreviate(sessionName, 10),
								StringUtils.abbreviate(sessionId, 10),
								StringUtils.abbreviate(token, 10)
								));
			return response;
		}
	}
}

For this API, when a user successfully logs in, a JSON response is returned that contains the authentication information:
sessionName, sessionId, and token.

If these are found in the response (which happens only during login) they are stored in member variables of the filter. On subsequent requests they are retrieved from those variables and attached to the request.

The filter is set up for use by JUnit/Rest-assured in a base class, but it can of course be the same class:


package com.raymondmauge.go.api.tests;

import org.apache.log4j.Logger;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import com.raymondmauge.go.api.tests.exceptions.AuthException;
import com.raymondmauge.go.api.tests.filters.AuthFilter;
import com.raymondmauge.go.api.tests.util.Config;
import com.raymondmauge.go.api.tests.util.TestFixtures;

import com.jayway.restassured.RestAssured;
import com.jayway.restassured.builder.RequestSpecBuilder;
import com.jayway.restassured.specification.RequestSpecification;

/**
 * Initializes variables that are useful for many tests.
 * System properties expected:
 * "settings_file": YAML file containing settings. Default,  "settings.yml"
 * @author rmauge
 *
 */
public abstract class BaseTest {
	
	protected static Config config = null;
	protected static RequestSpecification requestSpec = null;
	protected static AuthFilter authFilter = null;
	private static Logger log = Logger.getLogger(BaseTest.class);
	
	@BeforeClass
	public static void baseSetUp() throws AuthException {
		log.debug("BaseTest setup");
		config = new Config(System.getProperty("settings_file", Config.DEFAULT_SETTINGS_FILENAME));
		authFilter = TestFixtures.getAuthFilter(config);
		RequestSpecBuilder builder = TestFixtures.getDefaultBuilder(config);
		requestSpec = builder.build();
	}

	@AfterClass
	public static void baseTearDown() {
		log.debug("BaseTest teardown");
		RestAssured.reset();
		config = null;
		requestSpec = null;
		authFilter = null;
	}
}

A helper class, TestFixtures, is used to initialize a new filter instance by reading the login username, password, etc. from a YAML config file. These values could be hard-coded, but a config file makes changes easier.

/**
	 * 
	 * This is an expensive operation but the result can be re-used. 
	 * It expects that the following yml properties are set:
	 * client_csrf_header_name, user_session_name_key, user_session_id_key, user_token_key
	 * @param config yml file with test properties
	 * @return A filter that has the proper credentials to send a request on 
	 * behalf of a logged in user.
	 * @throws Exception 
	 */
	@SuppressWarnings("unchecked")
	public static AuthFilter getAuthFilter(Config config) throws AuthException {
		AuthFilter authFilter = new AuthFilter(
				config.get("client_csrf_header_name"),
				config.get("user_session_name_key"),
				config.get("user_session_id_key"),
				config.get("user_token_key")
				);
		
		JSONObject requestBody = new JSONObject();
		requestBody.put("email", config.get("user_email_valid"));
		requestBody.put("password", config.get("user_password_valid"));
		Response response =
		given().
			filter(authFilter).
			spec(getDefaultBuilder(config).build()).
			body(requestBody.toJSONString()).
		when().
			post("/user/login").
		then().
			extract().
			response();
		
		if (response.statusCode() != 200) {
			throw new AuthException(
					String.format("Authentication failed: HTTP %d", response.statusCode()));
		}
		
		return authFilter;
	}

If the above code is successful then the filter is ready for use. I am using Rest-assured outside of a test here because it is so awesome. But now I have to manually check for an invalid HTTP status code :(.

Now for the actual usage of the filter in a test:

package com.raymondmauge.go.api.tests;

import static com.jayway.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.notNullValue;

import org.json.simple.JSONObject;
import org.junit.Test;
import com.raymondmauge.go.api.tests.exceptions.AuthException;
import com.raymondmauge.go.api.tests.filters.AuthFilter;
import com.raymondmauge.go.api.tests.util.TestFixtures;

/**
 * Tests User API
 * @author rmauge
 *
 */

public class UsersTest extends BaseTest {
	
	@Test
	public void getUser() {
		given().
			filter(authFilter).
			spec(requestSpec).
		when().
			get("/user/").
		then().
			statusCode(200).
			body("user.email", equalTo(config.get("user_email_valid")));
	}
}

The filter that is populated in the Base class is now used for any tests that require a logged in user!

I hope that this has been helpful in exposing you to BDD.

P.S
Here is the code that implements reading from the YAML config file. It uses the SnakeYAML library.

package com.raymondmauge.go.api.tests.util;

import java.io.InputStream;
import java.util.Map;

import org.yaml.snakeyaml.Yaml;

/**
 * Helper class used to get properties from a yml file
 * @author rmauge
 *
 */
public class Config {
	// Referenced by BaseTest; the original post omitted this constant.
	public static final String DEFAULT_SETTINGS_FILENAME = "settings.yml";

	private Map<String, String> settings;

	@SuppressWarnings("unchecked")
	public Config(String configFile) {
		if (settings == null) {
			InputStream in = Thread.currentThread().
							 getContextClassLoader().
							 getResourceAsStream(configFile);

			settings = (Map<String, String>) new Yaml().loadAs(in, Map.class);
		}
	}

	public String get(String key) {
		return settings.get(key);
	}

	public String get(String key, String def) {
		String val = get(key);
		return (val == null ? def : val);
	}
}


System arguments in Python Windows Service

I recently had to install a Python application as a Windows Service and lost a few hours to a seemingly obvious problem involving arguments.

And of course I solved it rather quickly by using an old “trick”; walking away and then coming back when I was refreshed.

I used the Python for Windows Extensions library (pywin32) to create the service, which inherits from win32serviceutil.ServiceFramework.

The __init__ method receives two arguments, self and args, but you can also always access sys.argv as normal.

But it turns out that sys.argv only contains the name of the PythonService executable, for example ['C:\\Python27\\lib\\site-packages\\win32\\PythonService.exe'], which is used to actually call your main program. It does not contain any start parameters that you may have added in the Windows Service Manager when starting your service. You can find those start parameters in the args parameter instead.

The Python application uses argparse.ArgumentParser from the argparse library. The application was failing with argparse complaining:

File "C:\Python27\lib\argparse.py", line 1937, in _parse_known_args
self.error(_('too few arguments'))

I tried every combination of args before I hit upon the answer:

The args parameter is a tuple, not the list expected by a call to parser.parse_args().

Also, sys.argv has insufficient elements as received by __init__.

So the solution is to set sys.argv to the list that parser.parse_args() expects, using the elements in args:

sys.argv = [arg for arg in args]

Note:

This works when passing the arguments using the Windows Service Manager “start parameters” field. But this value is not persisted (why, Microsoft, why?), so the registry may be a good place to store values needed for subsequent runs.

Another workaround is to edit the registry value for your service.

In the Registry Editor window, navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\<ServiceName> and open the key ImagePath.

After the path to the PythonService you may add arguments.

Now you will NOT need to mess with sys.argv; it can be left as is, since it now has the arguments in a list as expected.


The Right of the People to keep and bear infamous language constructs

Having tried Ruby/Rails quite some time ago (2007) and disliking the “magic”, I decided to give the dynamic duo another chance to see what the older me thinks of them today.

So far I dislike Ruby less :p. It does take a while to get the knowledge needed to successfully grok a non-trivial Ruby program. What with its “yield” statement, callable blocks and <=> operator, it appears to be trying to please every type of programmer. But that’s ok.

What really got my attention was the catch/throw construct:

def i_throw_something()
  print "Please do not enter something bad "
  answer = readline.chomp
  throw :something_bad if answer == "something bad"
  return answer
end

catch :something_bad do
  i_throw_something
  puts "Great! something bad was not entered."
end

It is as close to the much-derided “goto/label” construct as I have seen since my glory days programming Apple BASIC.

But! The older me is more tolerant, so even though I may disagree with the use of goto I do not question its presence. After all, it is only the innocent programmers that are left to fend for themselves if there is language control. The bad programmers are already free to wield code in any of the “makes my eyes bleed” ways that they can think of. I commend Matz et al. for the bold steps in allowing this and other not-so-well-received parts into the language.

And I appeal to other language designers/committees: give us our guns, er, gotos, and at least a way to defend ourselves when/if we have the need.


The Expert Builder Paradox

It can be a source of argument as to the qualities and experience that one expects to be embodied by an “expert” in any field. I do not want to entertain this argument here; let that lie in the domain of Socrates or Plato. For this illustration let us agree that an expert software developer is one that is recognizable by other “expert” developers. I know I am cheating here, so let’s move on.

Experts are able to achieve Herculean accomplishments based on sheer skill. What appears to the naive outsider as effortless is actually a manifestation of hundreds of hours of refinement. A great tennis player has put many hours into his art, as has a great singer or magician or carpenter or violinist. There are at least two observations that can be made:
1. There are no shortcuts to achieving these skill levels and
2. The skills cover a very narrow domain and are not generally applicable elsewhere.

Expert software developers have likewise put many hours into honing their art. But for the expert software developer those two observations may not hold.

With the general utility libraries readily available, an expert software developer can build a reasonably complex web application over a few weekends. An expert violinist would probably be envious that they cannot use some “violin” library to add to a piece they are playing.

Also the expert developer can reuse his programming skills in other “domains”. For example, with a little motivation a kernel developer can move into web development and a business application developer can move into game development. Programming is after all very malleable.

The very nature of modern software development allows one to build things that are greater than would generally be expected from an expert in other fields. And to reiterate, I really do mean “experts” here, not the overnight “hacker” project prototypes that are passing for products lately.

But all suffer from the Expert Builder Paradox; that any application built by an expert and matching his skill is more likely to contain fatal flaws than one built by a novice that also matches his skill.

I remember building my first Perl and HTML (and Java applet, of course) web application in 1999, and I can even remember some of the exact variable names I used. And I don’t so fondly recall how vulnerable it was to SQL injection.
I also remember, at around the same time, saving the HTML source of web forms on high-profile sites and manipulating the elements to my advantage before submitting. I do not think those developers were any less expert than myself.

So while expert artisans in other fields are “protected” by the medium itself from building things that can have fatal flaws, software developers do not have that protection. They must be especially vigilant not to be deceived by the Expert Builder Paradox.

Try to remember this when you embark on your next big project so that when it feels too “easy” you know what to look for.


Monitor a directory for automatic upload to Amazon S3 with S-3PO

My employer has given me permission to open source the application that I developed to move our media to Amazon S3.
I think that it can be part of any solution that needs this functionality and I hope that the community finds it useful and contributes.

Find the source on GitHub.

Here is the description from the README

S-3PO

Drop files on your filesystem and have them automatically uploaded to Amazon S3. S-3PO listens for files copied to a directory that you choose. The files are uploaded to Amazon mirroring the structure of your local file system. After upload the files are deleted, leaving only the empty directory structure in place. Errors can be sent to an email address if needed.

Enjoy!


ManyToOne, OneToMany Indexed Collection Complete Example

Here is the complete code for a ManyToOne/OneToMany indexed collection. It took way too long to put this together, so I hope it will save many hours for others who need to solve the same problem.

package models;

import java.util.ArrayList;
import java.util.List;

import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.JoinColumn;
import javax.persistence.OneToMany;

import org.hibernate.annotations.IndexColumn;

@Entity
public class GameQueue {
	@Id
	@GeneratedValue
	public Long id;

	public Long getId() {
		return id;
	}

	@OneToMany(cascade=CascadeType.ALL, orphanRemoval=true)
	@IndexColumn(name="position", base=1, nullable=false)
	@JoinColumn(name="gameQueue_id", nullable=false)
	protected List<GameQueueItem> gameQueueItems = new ArrayList<GameQueueItem>();

	public List<GameQueueItem> getGameQueueItems() {
		return gameQueueItems;
	}
	public void setGameQueueItems(List<GameQueueItem> gameQueueItems) {
		this.gameQueueItems = gameQueueItems;
	}

	public void addGameQueueItem(GameQueueItem item) {
		gameQueueItems.add(item);
		item.setGameQueue(this);
	}
}

package models;

import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;

@Entity
public class GameQueueItem {
	@Id
	@GeneratedValue
	public Long id;

	public Long getId() {
		return id;
	}

	/** The position of this item record within the list of items belonging to the parent GameQueue */
	@Column(name="position", insertable=false, updatable=false)
	protected Integer position;

	/** Backing field for getStatus/setStatus below (missing in the original listing) */
	protected String status;

	@ManyToOne
	protected Product product;

	@ManyToOne(optional=false)
	@JoinColumn(name="gameQueue_id", insertable=false, updatable=false, nullable=false)
	protected GameQueue gameQueue;

	public Product getProduct() {
		return product;
	}
	public void setProduct(Product product) {
		this.product = product;
	}

	public String getStatus() {
		return status;
	}
	public void setStatus(String status) {
		this.status = status;
	}

	public Integer getPosition() {
		return position;
	}
	public void setPosition(Integer position) {
		this.position = position;
	}

	public GameQueue getGameQueue() {
		return gameQueue;
	}
	public void setGameQueue(GameQueue gameQueue) {
		this.gameQueue = gameQueue;
	}
}


Apache as a front-end proxy to a Play! Application on Ubuntu

Here are some instructions to get a Play Application working with Apache2 as a proxy.
I am using Ubuntu 10.04 and assume that Apache has already been installed.

Create a file that contains your virtual host configuration at “/etc/apache2/sites-available” (a2ensite, used below, will link it into sites-enabled).
To easily distinguish it, name it after your domain name, e.g. mysite.

The contents of this file should be similar to the following:


<VirtualHost *:80>
        ProxyPreserveHost On
        ProxyPass / http://127.0.0.1:9000/
        ProxyPassReverse / http://127.0.0.1:9000/
        ServerAdmin admin@mysite.com
        ServerName mysite.com
        ServerAlias www.mysite.com

        <Location />
          Order allow,deny
          Allow from all
        </Location>

        ErrorLog /var/log/apache2/mysite-error.log

        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn

        CustomLog /var/log/apache2/mysite-access.log combined
</VirtualHost>

To use these settings you must have the correct modules loaded by Apache. The ones you need are proxy and proxy_http.

Enable these by running

a2enmod proxy
a2enmod proxy_http

To enable your new site you should run

a2ensite mysite

and then reload Apache (e.g. sudo /etc/init.d/apache2 reload) for the changes to take effect.


Play! Framework render template shortcut

Sometimes you want to render directly to a known template from a controller.

To do this you can use the documented method renderTemplate(String templateName, Object[] args)

Doing so requires that the first parameter be the complete path to the template.
renderTemplate("Registration/signup.html", user)

But there is a shortcut that allows you to use the “@” symbol with the fully qualified class and method.

renderTemplate("@Registration.signup", user) or render("@Registration.signup", user) which gets expanded to "Registration/signup.html".

And if the template exists in the same Controller you can omit the class and simply use the method name.

render("@signup", user) which is expanded to "Registration/signup.html" if called from the Registration controller.


Uploading a file using HTTP Post and Play! Framework Databinding on Google App engine

Uploading a file with an HTTP POST to a Play! controller fails on Google App Engine if the controller action declares the parameter as java.io.File. Doing so gives a null value for the File variable in the controller:

HTML Form:

<form action="@{Application.uploadFile()}" method="POST" enctype="multipart/form-data">
    <input type="text" name="title" size="40"/>
    <input type="file" id="myfile" name="myfile" />
    <input type="submit" value="Send it..." />
</form>

And Controller:

    public static void uploadFile(File myfile, String title) {
    	if (myfile != null) {
    		Logger.info("Got a file");
    	} else {
    		Logger.info("Myfile is empty");
    	}
    	Logger.info("Title: " + title);
    }

To fix this you should not use File but instead use play.data.Upload type as the argument type:

    public static void uploadFile(Upload myfile, String title) {
    	if (myfile != null) {
    		Logger.info("Got a file");
    	} else {
    		Logger.info("Myfile is empty");
    	}
    	Logger.info("Title: " + title);
    }

For the curious: the first version fails on GAE because file writes are not allowed there, and the error

java.security.AccessControlException: access denied (java.io.FilePermission…)

is thrown in the Play! method public Map parse(InputStream body), which tries to save the File to disk.

You can also retrieve the contents by using an undocumented variable in the request. You can obtain a List of Uploads from your controller using

List<Upload> uploads  = (List<Upload>) request.current().args.get("__UPLOADS");

But this method is hacky and may break in the future.
