What's new in Django community blogs?

Continuously rebuild your project

[Archived Version] Published at Writing on django | David Winterbottom

New developers joining a project will often find that the project won't build cleanly on their machine, and hours of time will be sunk into setting up the project so work can start. This is sad and expensive for all concerned.

This is a particular menace in agencies (or anywhere with lots of small projects) where a large team of developers need to jump between projects. Tools like Vagrant and Docker can help but aren't the panacea they first seem to be [*].

Counter this by using continuous integration to build your project from scratch. Then any changes that break the build process (such as database schema changes not applying correctly) will be spotted early. New team members will ...

Get Data From WMS Layers Using GDAL and Python

[Archived Version] Published at Ivan Pasic Blog posts

Hello everyone, I haven't been writing for quite some time because I've been very busy these last months, but from now on I'll try to write as often as I can.

Today I'd like to talk a little bit more about the GDAL library, the WMS service, and how to extract different types of images from WMS layers.
Before you keep reading, let me tell you that ever since I started learning more about GIS I've always preferred vector data over raster, and I've honestly never used GDAL commands before.
However, now I've come up with a project idea where I need to handle some orthophoto imagery and do some raster image processing.
That's why I decided to learn more about it and to share with you some things that I've learned while playing with it.

Let's first give a short explanation of the terms that will be used here.

GDAL (Geospatial Data Abstraction Library) is a library for reading and writing raster geospatial data formats. It's released by OSGeo (the Open Source Geospatial Foundation) under an X/MIT-style open source license. The OGR library is part of the GDAL source tree and provides similar capability for vector data. You can read more about GDAL/OGR on its official site.

If you want to try the examples described in this post yourself, you should first install GDAL. You are free to choose how to do it, but if you'd rather not figure it out on your own, this should do the job:

cd /usr/local/src
wget http://download.osgeo.org/gdal/gdal-1.9.0.tar.gz
tar xvfz gdal-1.9.0.tar.gz
cd gdal-1.9.0
./configure --with-python
make
sudo make install

WMS (Web Map Service) is an OGC standard for serving georeferenced map images over the Internet. There are many WMS servers that can serve WMS layers, but probably the most often used are GeoServer and MapServer.

WMS requests support these operations:
GetCapabilities - retrieves metadata about the service (e.g. map image format, WMS version, map bounding box, CRS, ...)
GetMap - retrieves a map image for a defined area and content
GetFeatureInfo (optional) - retrieves geometry and attribute data for a specific location on the map
GetLegendGraphic (optional) - returns the map legend
DescribeLayer (optional) - returns a description of a layer (used with SLD styling)
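To make these request shapes concrete, here is a small Python 3 sketch (standard library only) that builds a GetCapabilities URL from keyword parameters; the server address is a placeholder, not a real endpoint.

```python
from urllib.parse import urlencode

def build_wms_url(base_url, **params):
    """Build a WMS request URL from keyword parameters."""
    return base_url + '?' + urlencode(params)

# 'http://example.com/wms' is a placeholder endpoint.
url = build_wms_url('http://example.com/wms',
                    service='WMS',
                    request='GetCapabilities',
                    version='1.1.1')
print(url)
```

The same helper works for any of the operations above; only the request parameter changes.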

To help you get a better idea of a simple GetMap request, let's try it on a real-world example. For this purpose I will use a WMS layer which contains DOF (Digital Orthophoto) imagery of my country (Croatia):

http://ganimed.geoportal.dgu.hr/cwms?service=WMS&request=GetMap&version=1.1.1&layers=DOF&srs=EPSG:3765&format=image/jpeg&width=256&height=256&bbox=399609,4887306,400326,4888023


If you copy and paste this URL into your browser you should see a small image (256x256 pixels) in the upper-left corner of the screen. It's the response to your GetMap request (if you don't see any image, it's probably because that server is no longer running; in that case please write a comment below this post so I can change it).

If you take a closer look at the URL you can see that we set different parameters there. Here is a short description of each:
service - service name
request - operation name
version - service version
layers - layers to display
srs - Spatial Reference System for map output
format - format for the map output
width - width of map output (in pixels)
height - height of map output (in pixels)
bbox - bounding box for map extent (minx,miny,maxx,maxy)

Those are just some of the parameters you can define for a WMS GetMap request, but for the purpose of this example there is no need for additional ones.

So, as you can see, we defined a set of parameters to get a JPEG image of a specific area within a WMS layer. Although this can often be very useful and interesting, sometimes we need a better way to access a WMS service and its data.
For example, if you need to save the response images, you probably don't want to do it manually by typing the URL into your browser every time and then saving the result.

As you might guess, this can easily be done with Python.

One way is to use the urllib library to call the WMS directly and write the response out to a file. However, this time I'd like to present another approach which gives you many more possibilities.
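For completeness, here's roughly what the urllib route could look like (Python 3 standard library; the parameters mirror the GetMap request used throughout this post, so treat it as a sketch rather than a tested script):

```python
from urllib.parse import urlencode
from urllib.request import urlretrieve

def getmap_url(base_url, bbox, size, layers, srs, fmt='image/jpeg'):
    """Build a WMS GetMap URL for the given area and output size."""
    params = {
        'service': 'WMS',
        'request': 'GetMap',
        'version': '1.1.1',
        'layers': layers,
        'srs': srs,
        'format': fmt,
        'width': size[0],
        'height': size[1],
        'bbox': ','.join(str(v) for v in bbox),
    }
    return base_url + '?' + urlencode(params)

url = getmap_url('http://ganimed.geoportal.dgu.hr/cwms',
                 bbox=(399609, 4887306, 400326, 4888023),
                 size=(256, 256), layers='DOF', srs='EPSG:3765')

# Download the response image to a file (requires network access):
# urlretrieve(url, 'dof_img.jpg')
```

This avoids any third-party dependencies, at the cost of building the request by hand.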

Accessing web image services using OWSLib

If you haven't found it already, there is a great library called OWSLib.
As its description says: "OWSLib is a Python package for client programming with Open Geospatial Consortium (OGC) web service (hence OWS) interface standards, and their related content models."
It can be used for other OGC services too, but in this example we will use it just for working with a WMS service.

Here is simple code showing how to do the previous GetMap request using the OWSLib library.

from owslib.wms import WebMapService

wms = WebMapService('http://ganimed.geoportal.dgu.hr/cwms', version='1.1.1')

# request the same image as before
img = wms.getmap(layers=['DOF'],
                 srs='EPSG:3765',
                 bbox=(399609, 4887306, 400326, 4888023),
                 size=(256, 256),
                 format='image/jpeg',
                 transparent=True)

# write the response out to a file
out = open('dof_img.jpg', 'wb')
out.write(img.read())
out.close()

This can be very useful when you develop web applications and need to make different WMS requests based on user input (e.g. a bounding box) and similar.
However, please visit the official OWSLib documentation to get a better idea of everything you can do with this library.

Accessing web image services using WMS in GDAL

Besides the previously described approach, accessing several different types of web image services is also possible using the WMS format in GDAL.
To do that, you first need to create a local service description XML file.
For example, if you want to get the same image as before (when we used Python), your XML file should look something like this:


<GDAL_WMS>
    <Service name="WMS">
        <Version>1.1.1</Version>
        <ServerUrl>http://ganimed.geoportal.dgu.hr/cwms?</ServerUrl>
        <SRS>EPSG:3765</SRS>
        <ImageFormat>image/jpeg</ImageFormat>
        <Layers>DOF</Layers>
    </Service>
    <DataWindow>
        <UpperLeftX>399609</UpperLeftX>
        <UpperLeftY>4888023</UpperLeftY>
        <LowerRightX>400326</LowerRightX>
        <LowerRightY>4887306</LowerRightY>
        <SizeX>256</SizeX>
        <SizeY>256</SizeY>
    </DataWindow>
</GDAL_WMS>

Now that you have the XML file created, you can run GDAL commands on it.
To begin, let's try to get information about our file:

gdalinfo path/to/gdal_wms_dgu.xml

You should see output with various information (e.g. driver, size, coordinate system, ...).

However, we are probably more interested in doing something more specific with our file.
You can find all the utility programs distributed with GDAL here: http://www.gdal.org/gdal_utilities.html
Probably the most often used command is gdal_translate. We will use it here to get an image output (a .jpeg file) of our defined area based on the previously created XML file.

gdal_translate -of JPEG -outsize 256 256 gdal_wms_dgu.xml dof_img2.jpg

As you can probably guess, this command gives us a JPEG image output (256x256 pixels) based on our XML file.
If you compare it to the dof_img.jpg image that we got by running our Python script, you should see that they are the same.
Please note that you need to explicitly define the output format, because by default it is set to GeoTIFF (GTiff), so instead of a .jpg file you would get a .tif file.
Actually, let's try that, because if you are working with map images and GIS applications you will probably need GeoTIFF images rather than JPEGs.

gdal_translate -of GTiff -outsize 256 256 gdal_wms_dgu.xml dof_geotiff.tif

Now, if you want, you can get more info about our output image by running:

gdalinfo dof_geotiff.tif

To find out more possible operations on raster data (like resampling and rescaling pixels, setting ground control points on output images,...) please visit http://www.gdal.org/gdal_translate.html

At the end, I'd just like to mention one more thing about running GDAL commands within Python.
First, it's important to mention that you can use GDAL within Python if you install it as a Python package: https://pypi.python.org/pypi/GDAL/.
Honestly, as I've just started exploring raster image processing with GDAL, I haven't had time to test the Python GDAL/OGR API yet, but I definitely will, and maybe I'll write a new post then.
However, I'd like to remind you that Python has a subprocess module which can be used for running system commands.

So if you want to run previous GDAL command within python script you can always do it like this:

import subprocess

subprocess.call(["gdal_translate", "-of", "GTiff", "-outsize", "256", "256", "gdal_wms_dgu.xml", "dof_geotiff.tif"])

That would be all for this post. I hope some of you will find it useful, or at least that it helps you get ideas for some of your future tasks and projects. I will continue studying this field and I hope to write more about it.

Feel free to comment and ask anything you want below.

Query and Filter Leaflet Map Layers

[Archived Version] Published at Ivan Pasic Blog posts

Most web maps today are developed using the same philosophy: they organize different types of data into layers to achieve better and easier visualization. Using simple controls, they allow users to choose which layers they want to see on the map.

However, although layers are a very good way to provide better and easier visualization of map data, they are often not enough.

Sometimes we want to query each of those layers and filter the data within each layer to get only the results we are interested in.
For example, if you have a layer of cities, maybe you would like to display only cities that are larger than 1000 m2 or that have more than 100,000 citizens, and so on.

In this tutorial I will try to explain how to filter map data based on attributes.
To make this easier to understand, we'll use a real-world example. So, let's say that in our DB we have a table ApartmentsNY with these fields:
- name
- city
- price
- wifi
- breakfast

We'd probably like to allow users to choose which apartments they see on the map based on some of those attributes.
For example, someone might want to see only apartments that are located in NYC and whose price is not over $100 per night. A user could be even more specific and say that she wants only apartments that have a WiFi network.
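Before touching Django at all, the filtering behaviour we want can be sketched in plain Python over some made-up sample data (the apartment names and values below are invented for illustration):

```python
apartments = [
    {'name': 'Loft A', 'city': 'New York City', 'price': 90,  'wifi': True,  'breakfast': False},
    {'name': 'Loft B', 'city': 'New York City', 'price': 140, 'wifi': True,  'breakfast': True},
    {'name': 'Casa C', 'city': 'Buffalo',       'price': 60,  'wifi': False, 'breakfast': False},
]

def filter_apartments(items, city=None, max_price=None, wifi=None):
    """Keep only items matching every filter that is not None."""
    result = []
    for apt in items:
        if city is not None and apt['city'] != city:
            continue
        if max_price is not None and apt['price'] > max_price:
            continue
        if wifi is not None and apt['wifi'] != wifi:
            continue
        result.append(apt)
    return result

# Apartments in NYC under $100 per night with WiFi:
cheap_nyc = filter_apartments(apartments, city='New York City', max_price=100, wifi=True)
```

The rest of the post is essentially this idea, moved server-side and driven by the map UI.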

So now that we know a possible use case, let's try to implement it.
For this example I will use technologies I've already written about - LeafletJS, PostGIS, (Geo)Django.
Although I'd really like to explain every step here, I can't because it would take too much time. So I assume you already know how to set up a Django project and create a simple map using LeafletJS. If you don't, please check some basic LeafletJS tutorials and read one of my previous articles http://ipasic.com/article/let-user-add-point-map-geodjango-leaflet/ to figure out how to set up a Django project with a PostGIS database.
You can also find the complete source code of this simple application in my GitHub repository.

So let's first create the model ApartmentsNY:


from django.contrib.gis.db import models
from django.contrib.gis import admin
from django.utils import timezone
from autoslug import AutoSlugField
from easy_thumbnails.fields import ThumbnailerImageField

def image_upload_folder(instance, filename):
    return "apartments_images/%s" % (filename)

choices_city = (
    ('New York City', 'NYC'),
    ('Syracuse', 'Syracuse'),
    ('Buffalo', 'Buffalo'),
    ('Rochester', 'Rochester'),
    ('Yonkers', 'Yonkers'),
)
class ApartmentsNY(models.Model):
    name = models.CharField("Name of the apartment", max_length=50)
    slug = AutoSlugField(populate_from='name', unique=True) 
    city = models.CharField("City", max_length=70, choices=choices_city)
    price = models.IntegerField("Price per night [$]")
    wifi = models.BooleanField("WiFi", default=False)
    breakfast = models.BooleanField("Breakfast", default=False)
    image = ThumbnailerImageField(upload_to=image_upload_folder, blank=True)

    geom = models.PointField(srid=4326)

    def __unicode__(self):
        return self.name

    class Meta:
        verbose_name_plural = "Apartments in NY"

admin.site.register(ApartmentsNY, admin.OSMGeoAdmin)

After you've created the model ApartmentsNY, go to your admin interface and add a few test entries. I've added 6 apartments for the purpose of this example (you can add as many as you want).

Now that we have added a few apartments, let's create a simple map where we first display them all.

**Note: I won't explain each part of the code here, but if you have problems understanding it please ask for help in the comments or send me an email. Also, please visit the GitHub repo to see the complete code and get a better understanding of it.


/* Define base layers */
var cycleURL = 'http://{s}.tile.thunderforest.com/cycle/{z}/{x}/{y}.png';
var cycleAttrib = 'Map data © OpenStreetMap contributors';
var opencyclemap = new L.TileLayer(cycleURL, {attribution: cycleAttrib});

var osmUrl = 'http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png';
var osmAttrib = 'Map data © OpenStreetMap contributors';
var osm = new L.TileLayer(osmUrl, {attribution: osmAttrib});

/* create new layer group */
var layer_apartments = new L.LayerGroup();
var array_markers = new Array();

/* create custom marker which will represent apartments in layer 'layer_apartments' */
var customMarker = L.Marker.extend({
    options: {
        title: 'Name of the apartment'
    }
});

/* define function which adds markers from array to layer group */
function AddPointsToLayer() {
    for (var i = 0; i < array_markers.length; i++) {
        layer_apartments.addLayer(array_markers[i]);
    }
}

/* Get all apartments from DB and add them to layer_apartments */
$.ajax({
    url: '/map/get-apartments/',
    type: 'GET',
    success: function(response) {
        $.each(eval(response), function(key, val) {
            // fields in JSON that was returned
            var fields = val.fields;

            // parse point field to get values of latitude and longitude
            var regExp = /\(([^)]+)\)/;
            var matches = regExp.exec(fields.geom);
            var point = matches[1];
            var lon = point.split(' ')[0];
            var lat = point.split(' ')[1];

            // create a new marker and bind a popup to it
            var marker = new customMarker([lat, lon], {
                title: fields.name,
                opacity: 1.0
            });
            marker.bindPopup("<strong>Name: </strong>" + fields.name + "<br><strong>City: </strong>"
                + fields.city + "<br><strong>Price: </strong>" + fields.price);
            array_markers.push(marker);
        });

        // add markers to layer and add it to map
        AddPointsToLayer();
    }
});

/* create map object */
var map = L.map('map', {
    center: [41.75, -74.98],
    zoom: 7,
    fullscreenControl: true,
    fullscreenControlOptions: {
        position: 'topleft'
    },
    layers: [osm, layer_apartments]
});

var baseLayers = {
    "OpenCycleMap": opencyclemap,
    "OpenStreetMap": osm
};

var overlays = {
    "Apartments in NYC": layer_apartments
};

L.control.layers(baseLayers, overlays).addTo(map);

As you can see, I've added 2 base layers to the map (OpenStreetMap and OpenCycleMap) and one overlay - layer_apartments - which contains all apartments from our database. To get all apartments from the database we use an AJAX call to send a GET request to our server. The received data is in JSON format, so we first need to parse it and then create map markers from it. Please bear in mind that for this to work you first had to define a view in your views.py which is responsible for serializing the data and returning it in JSON format.
I always use Django REST Framework when I need to implement some kind of REST API, so if you want to follow this example please install it in your virtualenv (all dependencies for this app can be found in 'requirements.txt' in my GitHub repo).


from rest_framework import status
from rest_framework.decorators import api_view
from rest_framework.response import Response
from django.core import serializers
from website.models import ApartmentsNY

@api_view(['GET'])
def get_apartments(request):
    result = ApartmentsNY.objects.all()
    data = serializers.serialize('json', result)
    return Response(data, status=status.HTTP_200_OK, content_type='application/json')
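One detail worth knowing: the serialized geom field comes back as WKT-style text such as 'POINT (-73.985 40.758)', which is why the JavaScript above extracts the coordinates with a regular expression. The same parsing in Python looks like this (the sample strings are illustrative):

```python
import re

def parse_point(wkt):
    """Extract (lon, lat) from WKT-style point text such as 'POINT (-73.985 40.758)'."""
    match = re.search(r'\(([^)]+)\)', wkt)
    if match is None:
        raise ValueError('no coordinate pair found in %r' % wkt)
    lon, lat = match.group(1).split(' ')
    return float(lon), float(lat)

# Works with or without an SRID prefix:
lon, lat = parse_point('SRID=4326;POINT (-73.985 40.758)')
```

The regular expression is the same one used in the JavaScript code, so both sides agree on the format.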

So, now that we have created our map (with all apartments displayed on it), let's try to implement a simple map filter.

Although it doesn't matter how and where on your map you create the 'div' element with select options for filtering, I really like the Leaflet sidebar plugin. It's easy to add to your map and, besides that, it has a nice responsive design, so it's really great to use.

After you have added the sidebar to your map and defined all the select options within it, let's see how filtering works.
It doesn't make sense to copy and paste all my code here, so please check it in the GitHub repo to get a better idea of how it works. I will show only the 2 most important parts here.
First, let's see the function which is called each time a user filters map objects using the sidebar filter.

/* getResult function is called every time when user filters map objects using sidebar filter */
function getResult() {
    // fetch value of all filter fields
    var selected_city = $("#select_city").val();
    var selected_price = $("#slider_price").data("value");
    var boolean_wifi = $("#select_wifi").val();
    var boolean_breakfast = $("#select_breakfast").val();

    // get fields where value is not 'all' so that you later filter only those fields
    var fields = new Array();

    if (selected_city !== 'all') {
        fields.push('city');
    }
    if (boolean_wifi !== 'all') {
        fields.push('wifi');
    }
    if (boolean_breakfast !== 'all') {
        fields.push('breakfast');
    }

    // price field doesn't have value 'all' so it will be filtered in any case
    fields.push('price');

    /* ajax call to get all apartments with defined filter values */
    $.ajax({
        url: '/map/apartments/filter/',
        type: 'GET',
        data: "city=" + selected_city + "&price=" + selected_price + "&wifi="
            + boolean_wifi + "&breakfast=" + boolean_breakfast + "&fields=" + fields,
        success: function(response) {
            // first delete all markers from layer apartments
            layer_apartments.clearLayers();
            array_markers = [];

            $.each(eval(response), function(key, val) {
                // fields in JSON that was returned
                var fields = val.fields;

                // parse point field to get values of latitude and longitude
                var regExp = /\(([^)]+)\)/;
                var matches = regExp.exec(fields.geom);
                var point = matches[1];
                var lon = point.split(' ')[0];
                var lat = point.split(' ')[1];

                // create new markers based on filtered values
                var marker = new customMarker([lat, lon], {
                    title: fields.name,
                    opacity: 1.0
                });
                array_markers.push(marker);
            });

            // add markers to layer and add it to map
            AddPointsToLayer();
        }
    });
}

So, the basic idea is to first get the values of all fields and then make a GET request to the server (using AJAX) to get only apartments filtered by those values. Then we simply delete all previous map markers and display only the ones we just received.

Here is the view function used to filter model objects and return them as JSON back to the client.


from rest_framework import status
from rest_framework.decorators import api_view
from rest_framework.response import Response
from django.core import serializers
from website.models import ApartmentsNY

@api_view(['GET'])
def apartments_filter(request):
    request_data = request.QUERY_PARAMS
    filtered_fields = request_data['fields']

    kwargs = {}

    if "city" in filtered_fields:
        kwargs['city'] = request_data['city']
    if "price" in filtered_fields:
        price = request_data['price'] # e.g. (150, 400)
        price_values = price[1:-1].split(',')
        min_price = price_values[0]
        max_price = price_values[1]
        kwargs['price__range'] = (min_price, max_price)
    if "wifi" in filtered_fields:
        kwargs['wifi'] = request_data['wifi']
    if "breakfast" in filtered_fields:
        kwargs['breakfast'] = request_data['breakfast']

    try:
        result = ApartmentsNY.objects.filter(**kwargs)
        data = serializers.serialize('json', result)
        return Response(data, status=status.HTTP_200_OK, content_type='application/json')
    except Exception:
        return Response(status=status.HTTP_400_BAD_REQUEST)

As you can see, we used Python kwargs to dynamically build the query and filter our model objects. At the end we serialized the data and returned it in the response as JSON.
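The kwargs trick is plain Python, nothing Django-specific: build a dict of field lookups and unpack it with ** at call time. A stand-alone illustration (fake_filter is a stand-in for the real queryset method, and the request data is made up):

```python
def fake_filter(**kwargs):
    """Stand-in for ApartmentsNY.objects.filter -- returns the lookups it received."""
    return kwargs

# Hypothetical query string data, as the view above would see it.
request_data = {'fields': 'city,price', 'city': 'New York City', 'price': '(50, 100)'}

kwargs = {}
if 'city' in request_data['fields']:
    kwargs['city'] = request_data['city']
if 'price' in request_data['fields']:
    # strip the surrounding parentheses, then split into low/high
    low, high = request_data['price'][1:-1].split(',')
    kwargs['price__range'] = (int(low), int(high))

lookups = fake_filter(**kwargs)
```

Any number of lookups can be added to the dict this way, which is what makes the filter view so compact.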

One good thing about all this is that if you already use GeoDjango, you can do many spatial queries too using built-in filters. For example, you can geolocate a user and then look for the closest apartment to their current location. However, that would probably be too much for this blog post; maybe I will write about it some other time.

I hope you understand the basic idea here. I wish I could go into more detail and explain it better, but it would take too much time, so if you have any questions please feel free to ask in the comment section below.
Also don't forget to check the GitHub repo with all the source code. Feel free to clone the code and play with it.


Serve Map Tiles Using Tile Map Service (TMS) in Ol3

[Archived Version] Published at Ivan Pasic Blog posts

In this blog post I will try to explain what the Tile Map Service (TMS) is and how it works. To give you a better idea, I will show how you can use it in a real example with OpenLayers 3.

What is TMS?

On OpenStreetMap Wiki you can find this definition:
"TMS (Tile Map Service) is a protocol for serving maps as tiles i.e. splitting the map up into a pyramid of images at multiple zoom levels."

So, the basic idea is that you have many different images (tiles) which are loaded when they come into view. At the smallest scale level there is only one tile; when you zoom in one level there are 4 tiles, and so on. So for each area covered by an image there are 4 higher-resolution images which are displayed when the user zooms into that area.
This approach is used by Google Maps, Bing and OpenStreetMap too.

Image taken from: R.García, J.Pablo de Castro, E.Verdú, M.J.Verdú and L.M.Regueras: Web Map Tile Services for Spatial Data Infrastructures
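Since every zoom level quadruples the tile count, it's easy to compute how many tiles a pyramid needs:

```python
def tiles_at_zoom(z):
    """Number of tiles at zoom level z in a quad-tree tile pyramid."""
    return 4 ** z

def total_tiles(max_zoom):
    """Total tiles needed for all levels from 0 up to max_zoom."""
    return sum(tiles_at_zoom(z) for z in range(max_zoom + 1))

# Zoom 0 has 1 tile, zoom 1 has 4, zoom 2 has 16, and so on.
print(total_tiles(5))  # levels 0 through 5
```

This growth is exactly why pre-rendering deep pyramids takes a lot of disk space, as discussed below.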

Maybe you think: "why should I use TMS when I already know how to serve my maps using WMS?" Well, although WMS is a great way to serve your map, in some cases it's just not the best solution.

The main advantage of TMS is that it's very fast, because tiles are pre-rendered on the server side. Often this reduces the waiting time for data, and your map is delivered to visitors much faster. Besides that, it doesn't require any extra software (like GeoServer or MapServer), it provides better reliability, and scaling is much easier.
However, I'm not saying it's a perfect solution, and I don't advise you to use it for every use case. For example, a disadvantage of TMS is that you will probably have a lot of data to store, and if your map data changes often it is harder to keep up to date.

Anyway, enough talking, let's try to see how it works on our example.

TMS Layer Example

For this purpose we will use data from Natural Earth, so please go to this link and download the zip file with Natural Earth I data (large size).
When you unzip it you should see 5 files; the most important for us is the NE1_HR_LC.tif file. It's a 21,600x10,800 pixel image which we will use to create our map tiles.

Now, before you continue please first make sure that you have GDAL installed, otherwise you won't be able to follow this tutorial.
GDAL is our friend and you will probably use it most of the time when you work with raster data.

So, first let's get some info about our image with 'gdalinfo':

gdalinfo NE1_HR_LC.tif 

If everything went well, you should see output with information about the image size, coordinate system, pixel size, ...

As you can see, the coordinate system of this image is WGS84 (EPSG code 4326).
Before we continue, let's reproject it to the Spherical Mercator projection (EPSG code 3857), which is used by most web mapping applications.
This can be done with the GDAL command 'gdalwarp', which is used for image reprojection.

Important note! gdalwarp sometimes creates output images which are much bigger than the input images.

To avoid that, we will gdalwarp to a VRT first and then run gdal_translate with the -co compress=LZW option:

gdalwarp -t_srs EPSG:3857 -of vrt NE1_HR_LC.tif natural_earth.vrt
gdal_translate -co compress=LZW natural_earth.vrt natural_earth.tif
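If you'd rather script this two-step warp-and-compress from Python, a sketch using the standard subprocess module could look like the following (it assumes the GDAL binaries are on your PATH; the commands are built as argument lists first so you can inspect them before running):

```python
import subprocess

def reproject_commands(src, vrt, dst, t_srs='EPSG:3857'):
    """Build the gdalwarp-to-VRT and gdal_translate command lines."""
    warp = ['gdalwarp', '-t_srs', t_srs, '-of', 'vrt', src, vrt]
    translate = ['gdal_translate', '-co', 'compress=LZW', vrt, dst]
    return warp, translate

warp_cmd, translate_cmd = reproject_commands(
    'NE1_HR_LC.tif', 'natural_earth.vrt', 'natural_earth.tif')

# Run them in order (requires GDAL installed):
# subprocess.check_call(warp_cmd)
# subprocess.check_call(translate_cmd)
```

Keeping the commands as lists avoids shell quoting problems and makes the pipeline easy to reuse for other rasters.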

Now that we have our natural_earth.tif image in the new coordinate system, let's generate our map tiles from it.

To create the tiles we will use the GDAL2Tiles utility, which generates a directory of small tiles and metadata following the OSGeo Tile Map Service Specification.
If you don't like the command line that much, you can also do it using the MapTiler application. It's a very nice product developed by Petr Pridal from Klokan Technologies. It also uses gdal2tiles, but it has a nice GUI which makes the whole process easier. However, the free version of MapTiler doesn't support tiling of large images and has some other small limitations, so please try to follow this example using command-line tools.

So, let's try to create our directories with map tiles by running:

gdal2tiles.py natural_earth.tif 

*Please be patient; this process can take longer than you may expect (especially if the input image is very large).

If everything went well you should now see newly created directory with name natural_earth.

Now let's explain that directory structure.
There are 6 main directories (0-5), where the name of each directory is the zoom level number. Within each of those there are one or more directories named after the tile's column number, and inside each of those you find one or more images named after the tile's row number.
So, for example, the tile at natural_earth/5/4/3.png is the tile in column 4, row 3 (counted from the bottom, 0-based) at zoom level 5.

Important note!
Tiles generated with GDAL2Tiles always follow the TMS schema. However, if you use MapTiler to generate your map tiles there is a difference, because MapTiler creates XYZ/WMTS tiles instead of TMS tiles.
Please visit this link http://www.maptiler.org/google-maps-coordinates-tile-bounds-projection/ to get a better idea of how those two types differ.
You will notice that the column number is the same for both schemas but the row number (y) is different.
So the only difference between the two is a flipped y coordinate.
You can express that relation using this formula: y = (2^z) - y - 1
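In code that flip is a one-liner, and since the transformation is its own inverse the same function converts in both directions:

```python
def flip_y(z, y):
    """Convert a tile row between TMS and XYZ numbering (the flip is symmetric)."""
    return (2 ** z) - y - 1

# At zoom 5 there are 2**5 = 32 rows, so TMS row 3 is XYZ row 28 and vice versa.
print(flip_y(5, 3))   # -> 28
print(flip_y(5, 28))  # -> 3
```

This is exactly the y adjustment done inside the tileUrlFunction in the OpenLayers example below, written there as (1 << z) - y - 1.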

Now that you know how your tiles are organized, we can create a simple ol3 map with a TMS layer containing the Natural Earth data.


<!DOCTYPE html>
<html>
  <head>
    <title>TMS Layer example in ol3</title>
    <meta charset="utf-8"/>
    <meta name="viewport" content="initial-scale=1.0, user-scalable=no"/>
    <script src="http://ol3js.org/en/master/build/ol.js" type="text/javascript"></script>
    <link rel="stylesheet" href="http://ol3js.org/en/master/css/ol.css" type="text/css">
    <style>
      html, body, #map {width:100%; height:100%; margin:0; padding:0;}
    </style>
  </head>
  <body>
    <div id="map" class="map"></div>
    <script type="text/javascript">
      var extent = ol.proj.transform([-180,-85.051129,179.976804,85.051129],
                                     'EPSG:4326', 'EPSG:3857');
      var center = ol.proj.transform([-0.011598000000006436, 0],
                                     'EPSG:4326', 'EPSG:3857');
      var map = new ol.Map({
        layers: [
          new ol.layer.Tile({
            source: new ol.source.XYZ({
              tileUrlFunction: function(coordinate) {
                if (coordinate == null) {
                  return "";
                }
                var z = coordinate.a;
                var x = coordinate.x;
                // flip the row number: TMS counts rows from the bottom
                var y = (1 << z) - coordinate.y - 1;
                return 'natural_earth/' + z + '/' + x + '/' + y + '.png';
              },
              extent: extent,
              minZoom: 0,
              maxZoom: 5
            })
          })
        ],
        renderer: 'canvas',
        target: 'map',
        view: new ol.View({
          projection: 'EPSG:3857',
          center: center,
          zoom: 1
        })
      });
      map.getView().fitExtent(extent, map.getSize());
    </script>
  </body>
</html>

DjangoCon Ticket Giveaway!

Aug 14 2014 [Archived Version] Published at Caktus Blog

Caktus is giving away a DjangoCon ticket valued at $850! DjangoCon is the main US Django conference and it’s returning to Portland this year, August 30 - September 4th. Meet fellow Django developers, learn what others are doing, and have a good time! To enter the giveaway: (1) follow us @caktusgroup and (2) retweet our...

Introducing Amygdala, a JavaScript REST client

Aug 14 2014 [Archived Version] Published at Lincoln Loop

We’ve been working on a new UI and front-end architecture for our communication tool, Ginger. In doing so, we built a new JavaScript library to communicate with our django-rest-framework powered API(s).

Inspired by hood.ie but aimed at custom API back-ends, Amygdala was born out of the desire to reduce the complexity involved with managing multiple JavaScript modules (controllers or models/collections) that do essentially the same thing, fetch and sync data.

The result is a single module where you define your schema and API settings, and you're done.

var store = new Amygdala({
  'config': {
    'apiUrl': 'http://localhost:8000',
    'localStorage': true
  },
  'schema': {
    'users': {
      'url': '/api/v2/user/'
    },
    'teams': {
      'url': '/api/v2/team/',
      'orderBy': 'name'
    },
    'discussions': {
      'url': '/api/v2/discussions/'
    }
  }
});

// GET
var users = store.get('users');
// POST
store.add('teams', {'name': 'Lincoln Loop', 'active': true});
// PUT
store.update('users', {'url': '/api/v2/user/32/', 'username': 'amy82', 'active': true});
// DELETE
store.remove('users', {'url': '/api/v2/user/32/'});

Whenever one of the above methods is called, the resulting response data is stored in a local cache, which is very handy when you want to minimize network requests, such as in mobile and realtime applications.

To access the cached data, there are only a couple methods you need to know about:

// Get the list of active users from the local cache
 var users = store.findAll('users', {'active': true});

 // Get a single user from the local cache
 var user = store.find('users', {'username': 'amy82'});

We currently have very basic offline read-only support through localStorage. One way we’re using it is to load the cached data and render our app immediately while fetching the new data in the background.

// get data from the cache (on the view’s render method)
var discussions = store.findAll('discussions', {'team': this.props.team.url}) || [];

// fetch the most recent discussions for the team
store.get('discussions', {'team__slug': this.props.team.slug}).done(function() {
    // re-render our view
});
Or you can simply listen to changes, which will be triggered when you call the get, update, add or remove methods.

store.on('discussions:change', function() {
   // re-render our view
});
We’re far from done and we’d love your feedback and/or contributions. If you’re interested in Amygdala, download it from npm, check out the Github repo or come say hi at #lincolnloop on Freenode.

DjangoCon US 2014 program announced

Aug 14 2014 [Archived Version] Published at The Django weblog

It's taken longer than we would have liked, but the DjangoCon US 2014 program is now official. In September, we'll be treated to 40 talks, ranging from case studies on Django use in unusual industries and deep dives into technical aspects of using Django and its ecosystem, to the social aspects of being involved in a technical community. Tickets are still available (including earlybird tickets for another day or so), and if the program is any indication it's going to be an amazing show.

However, we know that attending DjangoCon US can be an expensive exercise, and as a result, some people may not be able to attend due to financial hardship. To ensure that as many people as possible can attend DjangoCon US, the Django Software Foundation is announcing a travel grant program for the event. If you require financial assistance to attend DjangoCon US, fill in this form. We can't guarantee that we'll be able to satisfy every request for aid, but we'll do our best to make sure as many people, from as diverse a background as possible, can attend. Applications are open until August 22; we'll aim to notify successful applicants by August 25.

See you in Portland in a few weeks!

Hola, Argentina!

Aug 13 2014 · Published at pydanny under tags argentina django eventbrite pyladies python

I'll be arriving in Argentina on August 14th. I'll be speaking at PyDay Mendoza on August 15th, and sprinting on August 16th on something Audrey and I named the "new library sprint" (more details to come). On August 22nd, I'll be speaking at the combined Buenos Aires Python, Django, and PyLadies Argentina meetup. Between the events I'll be working from the Eventbrite/Eventioz Argentina office.

I'm extremely excited. I've wanted to go to Argentina for years. It's a beautiful country filled with magnificent culture, lovely architecture, great food, superb wine, and wonderful people. Speaking of which, I can't wait to finally put faces to people I've gotten to know over the internet.

Hasta pronto!


Technical documents should be reviewed by technical people

Aug 11 2014 · Published at David Grant's blog under tags management software engineering

Just recently, I was implementing a protocol at work. The protocol was handed to me in the form of a Word document, and it appeared to have already cleared some approval stages. The protocol seemed over-engineered to me. It used HTTP POST with XML in the request and response bodies. There was actually only one message type, and it seemed like it could easily have been implemented with a simple GET request and no response body (just an HTTP response with a proper response code). The document was 24 pages long when it required only one. It had about 10 error response message types, about half of which had to do with XML errors. I looked at who reviewed the document and it was as follows:

  • VP Engineering
  • Technical Director
  • Director of Engineering, Client Engineering
  • Director of Engineering, Host and Infrastructure Engineering
  • Engineering Manager, Quality Engineering

The document was written by an Engineering Manager. Missing are Principal Engineers and, most importantly, Senior Developers like myself who will be implementing the protocol. It's not even necessary to consult the people who will be implementing the protocol, but you do need to consult people who understand HTTP, have experience using web protocols, and understand programming. I would say that one of the above reviewers/authors would qualify, but that is just one person. I am sure that if this were shown to three other people who aren't managers, directors, or VPs, they would say this protocol is over-engineered; even one person thinking so should be enough to prompt a second look. And over-engineering things is costly: it adds complexity on both sides of the protocol, which means more development, more bugs, and more testing.

Starting with django-allauth

Aug 11 2014 · Published at GoDjango Screencasts and Tutorials

There are a lot of ways to do authentication in Django. You can do social authentication and/or django.contrib.auth authentication. Generally they are separate, but with django-allauth you can combine them both into one package. It even gives you a great jumping-off place with plenty of other features. In this video, learn how to start using django-allauth.
Watch Now...

Serve static and uploaded files from separate locations in Django on Heroku

Aug 11 2014 · Published at michalcodes4life under tags amazon s3 angularjs cdn configuration django

Recently, I was faced with the problem of serving AngularJS templates from a CDN. I wanted to serve everything (‘static’ and ‘media’) from Amazon S3, but got stuck on Angular’s insecurl error. I tried the whitelist approach, even disabling sce altogether at the end, but nothing really helped. I still don’t know exactly […]

Security advisory: remove_tags safety

Aug 11 2014 · Published at The Django weblog

We've received a report that the django.utils.html.remove_tags() function and the related removetags template filter do not correctly strip obfuscated tags. In particular, they don't work recursively, so remove_tags("<sc<script>ript>alert('XSS')</sc</script>ript>", "script") won't remove the "nested" script tags.

If you are using the remove_tags() function or the removetags template filter on user-provided input in your projects, please review your code and ensure that you never mark their output as safe without escaping it first. If you are using the output of remove_tags() in a template, or if you are using the removetags template filter, escaping is the default and safe behavior unless you have disabled automatic escaping in your templates.
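To see why a single pass isn't enough, here is a simplified sketch of regex-based tag removal in the style the advisory describes (an illustration only, not Django's exact implementation), together with escaping via Python's standard library, which neutralizes whatever markup survives:

```python
import html
import re

def naive_remove_tags(text, tags):
    # Single-pass tag removal: strip all start tags, then all end tags.
    # A hypothetical sketch of the failure mode, not Django's actual code.
    tags_re = "|".join(re.escape(tag) for tag in tags.split())
    starttag_re = re.compile(r"<(%s)(/?>|(\s+[^>]*>))" % tags_re)
    endtag_re = re.compile(r"</(%s)>" % tags_re)
    text = starttag_re.sub("", text)
    text = endtag_re.sub("", text)
    return text

payload = "<sc<script>ript>alert('XSS')</sc</script>ript>"
stripped = naive_remove_tags(payload, "script")
# Removing the inner <script> and </script> reassembles an outer script tag.
print(stripped)
# Escaping the output (the safe default in templates) renders it harmless.
print(html.escape(stripped))
```

Because the removal is not applied recursively, deleting the inner tags splices the surrounding fragments into exactly the tag that was supposed to be gone, which is why the output must never be marked safe without escaping.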

We plan to deprecate these functions in a future version of Django rather than keep around functions whose names are likely to lead to their use in security-sensitive contexts where they are not actually safe.

This issue was reported to security@djangoproject.com by Yoann Ono Dit Biot. We thank him for taking the cautious approach of privately reporting this issue, rather than logging a public ticket in Django's Trac instance. Please see our security policies for further information on reporting security issues.

DjangoCon US Closing Keynote Announced

Aug 06 2014 · Published at The Django weblog

Today DjangoCon US announced the second of 2014's keynote speakers, Daniele Procida. Daniele's achievements in the Django community include the development and presentation of "Don't be Afraid to Commit" workshops encouraging engagement with the open source (and specifically the Django) developer community. These practical sessions include instruction on how to get the most out of the combination of pip, virtualenv and git, essential to any modern Python user. With over 600 individual commits to GitHub projects in the last year, Daniele is clearly an open source devotee.

Daniele's talk, All You Need is L***, will attempt to place technical development in a larger context. As members of a community, it's important to be aware of the sometimes-invisible conditions (invisible particularly to those who already enjoy them) of participation and success, and the ease with which good fortune can be construed as what we deserve.

Daniele will be discussing some of the things that make it possible for an individual to participate and succeed in our industry, and our community. Are they fair? Can they be made fair? And what are we in the Django community doing about it?

This closing keynote is certain to give you food for thought, and will doubtless be the subject of many conversations in the sprints and for delegates on their way home from DjangoCon US.

Math Games

Aug 05 2014 · Published at Alex Gaynor

When I was in the third grade, my friend Alexander and I invented a math game (where by "invented" I mean I'm unable to precisely track down the origins of this game, so I'm assuming we created it entirely on our own). In this post I'm going to describe how to play the game, and why I think it was a really excellent tool for teaching several skills.


To start with, you need a deck of cards. We used some special math cards where the number of cards with each value was not evenly distributed, and the cards went 1-20 (possibly zero was included). Since most people don't have access to these, a deck of playing cards should work in a pinch. Note that it's not really important to have a complete deck, or only one deck, and the faces don't matter, so it's fine to mix and match decks from that drawer where you accumulate partial card decks.

To start, you deal a row of 4 cards, face down in a line between the two players:

    Player one

C     C     C     C

    Player two

Then, on each side of the row of cards in the middle, you deal a pile of 4 cards, face down. (Meaning each player has 4 stacks of 4 cards, each of which is associated with one of the cards in the middle):

    Player one

4C    4C    4C    4C

C     C     C     C

4C    4C    4C    4C

    Player two

To start the game, each player flips two of the cards in the middle over, simultaneously.

Then players look at each of their stacks, and for each one they must find a series of arithmetic operations which lead to the value of the corresponding card in the middle. For example, if my target was 7, and I was dealt 2, 4, 6, 8, I might find: (8 - 4) + (6 / 2). Each number must be used exactly once, and any binary operators are legal (at the time we only knew about addition, subtraction, multiplication, and division, but if you can find a use for logarithms or exponentiation be my guest).

Once a player has found a series for all 4 of their stacks, they tell their opponent, and then they explain the series of operations they used for each target. If a player has forgotten one of the solutions, or made a mistake, both of them return to trying to find solutions.

If finding a solution seems impossible, a player can show their cards to their opponent and both of them can think really hard about if it's possible.
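The search described in the rules (find a series of operations hitting the target, using each card exactly once) can be sketched as a small brute-force solver. This is purely my own illustration, not part of the game; the function name and floating-point tolerance are invented for the example.

```python
def solve(cards, target, eps=1e-9):
    """Return an arithmetic expression using each card exactly once that
    evaluates to target, or None if no such expression exists."""
    def search(items):
        # items is a list of (value, expression-string) pairs.
        if len(items) == 1:
            value, expr = items[0]
            return expr if abs(value - target) < eps else None
        # Pick any ordered pair of entries, combine them with each of the
        # four operations, and recurse on the shorter list. Trying every
        # ordered pair covers all possible parenthesizations.
        for i in range(len(items)):
            for j in range(len(items)):
                if i == j:
                    continue
                (a, ea), (b, eb) = items[i], items[j]
                rest = [items[k] for k in range(len(items)) if k not in (i, j)]
                candidates = [
                    (a + b, "({} + {})".format(ea, eb)),
                    (a - b, "({} - {})".format(ea, eb)),
                    (a * b, "({} * {})".format(ea, eb)),
                ]
                if abs(b) > eps:  # avoid division by zero
                    candidates.append((a / b, "({} / {})".format(ea, eb)))
                for value, expr in candidates:
                    found = search(rest + [(value, expr)])
                    if found:
                        return found
        return None

    return search([(float(card), str(card)) for card in cards])

# The hand from the example above: target 7, dealt 2, 4, 6, 8.
# Prints one valid expression, such as the (8 - 4) + (6 / 2) solution.
print(solve([2, 4, 6, 8], 7))
```

The solver also settles the "is this even possible?" question the rules mention: since the search is exhaustive, a None result means no combination of the four operations reaches the target.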

Why I like this

I believe this game was an important tool in developing my arithmetic skills at an early age. It teaches a few skills:

  • Arithmetic: Obviously you have to be able to do computations quickly and correctly in order to do well at this game.
  • Estimation: One of the tricks to quickly processing all the possible operations you could perform is to look at the scale of the numbers; for example, if I'm targeting a 2, and in my hand I have a 7 and an 8, I would never contemplate 7 * 8, because I know it will be difficult to get back down from 56.
  • Cooperation: While it is something of a competitive game, when a player is stuck and thinks a solution is impossible to find, the other player helps them. Further, at the end of a round, a player always explains how they found their solutions, so the other player can learn.
  • Fun: While not a skill as such, this game teaches that math is fun! There's no doubt in my mind that this game is more fun than the memorization of multiplication tables many students are forced to do.

OSCON 2014 & REST API Client Best Practices

Aug 04 2014 · Published at Caktus Blog

Mark Lavin, Caktus Technical Director and author of the forthcoming Lightweight Django, was recently at OSCON 2014 in Portland, where he gave a talk on improving the relationship between server and client for REST APIs. OSCON, with over 3000 attendees, is one of the largest open source conferences around. I sat down with him to...

django-planet aggregates posts from Django-related blogs. It is not affiliated with or endorsed by the Django Project.
