Cellular automata in Processing

I am experimenting with cellular automata behaviours in Processing. Computation took some time, so this is just a quick time-lapse recording made with my iPhone rather than exporting single frames and converting them to high-quality video.

Cell[][] _cellArray;

int _cellSize = 2;
int _numX, _numY;

void setup()
{
  size(256, 256);
  _numX = floor(width/_cellSize);
  _numY = floor(height/_cellSize);

  frameRate(60);
  noStroke(); // strokeWeight(0) would still draw a hairline outline around each cell

  restart();
}

void restart()
{
  _cellArray = new Cell[_numX][_numY];

  for (int x = 0; x < _numX; x++)
  {
    for (int y = 0; y < _numY; y++)
    {
      Cell newCell = new Cell(x, y);
      _cellArray[x][y] = newCell;
    }
  }

  for (int x = 0; x < _numX; x++)
  {
    for (int y = 0; y < _numY; y++)
    {
      int above = y-1;
      int below = y+1;
      int left = x-1;
      int right = x+1;

      // wrap around the edges so the grid behaves like a torus
      if (above < 0) above = _numY-1;
      if (below == _numY) below = 0;
      if (left < 0) left = _numX-1;
      if (right == _numX) right = 0;

      _cellArray[x][y].addNeighbor(_cellArray[left][above]);
      _cellArray[x][y].addNeighbor(_cellArray[left][y]);
      _cellArray[x][y].addNeighbor(_cellArray[left][below]);

      _cellArray[x][y].addNeighbor(_cellArray[x][above]);
      _cellArray[x][y].addNeighbor(_cellArray[x][below]);

      _cellArray[x][y].addNeighbor(_cellArray[right][above]);
      _cellArray[x][y].addNeighbor(_cellArray[right][y]);
      _cellArray[x][y].addNeighbor(_cellArray[right][below]);
    }
  }
}


void draw()
{
  //if (millis() % 5 == 0)
  //  restart();

  background(200);

  for (int x = 0; x < _numX; x++)
  {
    for (int y = 0; y < _numY; y++)
    {
      _cellArray[x][y].calcNextState();
    }
  }

  translate(_cellSize/2, _cellSize/2);
  for (int x = 0; x < _numX; x++)
  {
    for (int y = 0; y < _numY; y++)
    {
      _cellArray[x][y].drawMe();
    }
  }
}


////////////////////////////////////////////////////////////////////////////////////////////////////////////

class Cell {
  float x, y;
  float state;
  float nextState;
  float lastState = 0;

  Cell[] neighbors;

  Cell(float ex, float why)
  {
    x = ex * _cellSize;
    y = why * _cellSize;

    nextState = ((x/500) + (y/300)) * 14; // initial state: a diagonal gradient across the canvas

    state = nextState;
    neighbors = new Cell[0];
  }

  void addNeighbor(Cell cell)
  {
    neighbors = (Cell[])append(neighbors, cell);
  }

  void calcNextState()
  {
    // sum the states of the eight neighbors
    float total = 0;
    for (int i = 0; i < neighbors.length; i++)
      total += neighbors[i].state;

    // truncate to a whole number so the == 255 / == 0 rules below can match exactly
    float average = int(total/8);

    if (average == 255)
      nextState = 0;
    else if (average == 0)
      nextState = 255;
    else 
    {
      nextState = state + average;

      if (lastState > 0)
        nextState -= lastState;
      if (nextState > 255)
        nextState = 255;
      else if (nextState < 0)
        nextState = 0;
    }

    lastState = state;
  }

  void drawMe()
  {
    state = nextState;

    fill(state);

    rect(x,y,_cellSize, _cellSize);
  }

}

Image upload bugfix – WordPress 4.5.1

After upgrading to WordPress 4.5.1 I wasn’t able to upload images anymore, neither from the iPad nor from my MacBook. It seems to be related to Imagick and the size of the image you’re trying to upload. While there is no fix from WordPress yet, a small addition to the functions.php of your current theme will fix it temporarily:

add_filter( 'wp_image_editors', 'change_graphic_lib' );

// Prefer the GD editor over Imagick until WordPress ships a fix.
function change_graphic_lib( $array ) {
    return array( 'WP_Image_Editor_GD', 'WP_Image_Editor_Imagick' );
}


Tutorial – Get the day of the week with NSCalendar for any NSDate

NSCalendar is a mighty class which lets you do a lot of different things with NSDate. At first it might look a bit complicated, but with some practice it becomes very useful when working with dates (especially together with NSDateFormatter). I will use Objective-C here, but translating it to Swift shouldn’t be too hard.

So if you want to know which day of the week a particular NSDate falls on, you first have to create an NSCalendar to work with. To do all the date-related work you also need an NSDateComponents object, which defines how to work with the calendar. With these we create a date for each day of the week our date lies in and loop through them; in every iteration we check whether the created date is the same as our original date. I’ve created a function for date comparisons since I need them regularly in my app. You basically just compare the single components like day, month, year, hour, minute, and so on.

//Create our NSCalendar to work with
NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSCalendarIdentifierGregorian];

//Week starts on Monday in Europe! People in the US can comment the following line out.
[gregorian setFirstWeekday:2];

//Get today
NSDate *today = [NSDate date];

//We need the date components to work with our NSCalendar
NSDateComponents *dateComponents = [gregorian components:(NSCalendarUnitYear | NSCalendarUnitMonth | NSCalendarUnitWeekOfYear | NSCalendarUnitWeekday) fromDate:today];

// Loop through the week (weekday 8 normalizes to the Sunday that ends a Monday-based week)
for (int i = 2; i < 9; i++) {
    //Set the weekday and create a new date from it
    [dateComponents setWeekday:i];
    NSDate *weekDay = [gregorian dateFromComponents:dateComponents];

    //Compare the new date with our "today"
    if ([DateFunctions isDate:weekDay equalWith:today])
    {
        //Do your stuff here; the day of the week is i
    }
}
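The DateFunctions helper isn’t shown in the post; the idea is simply to compare the individual calendar components of the two dates. A rough sketch of that comparison (written in Java with java.util.Calendar for brevity; the class and method names here are made up, not the actual Objective-C helper):

```java
import java.util.Calendar;
import java.util.Date;

public class DateCompare {
    // Component-wise equality, mirroring the isDate:equalWith: idea:
    // two dates count as equal when year, month and day of month match.
    public static boolean isSameDay(Date a, Date b) {
        Calendar ca = Calendar.getInstance();
        Calendar cb = Calendar.getInstance();
        ca.setTime(a);
        cb.setTime(b);
        return ca.get(Calendar.YEAR) == cb.get(Calendar.YEAR)
            && ca.get(Calendar.MONTH) == cb.get(Calendar.MONTH)
            && ca.get(Calendar.DAY_OF_MONTH) == cb.get(Calendar.DAY_OF_MONTH);
    }
}
```

If you also care about hours and minutes, just extend the comparison with those components.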


So with this little piece of code you are able to determine which day of the week your NSDate object falls on. I hope this is helpful; if you have any questions, feel free to use the comments below!

Thoughts on prototyping iOS user interfaces

I recently watched a WWDC 2014 session called “Fake It Till You Make It”, which gives insights into Apple’s process of interface development for iOS apps. In a nutshell, the talk was about time-saving, fast-feedback ways to develop the right interface for an app from the start.

When I began developing Stepr I had a certain idea of the general look of the app but wasn’t sure about the details. This led to decisions which weren’t the best. I didn’t give much thought to how to use the available screen space, so I tried to fit my “idea” of the interface onto the screen of the iPhone. As a result, the progress bar for the user’s daily steps was way too big, and the text showing the exact number of steps filled an enormous amount of space too. The result was a user interface that was far from gorgeous.

So where did I fail? The answer is relatively simple: I didn’t develop the interface in iterations! I’ve tried to fix certain mistakes like color choice in previous releases, but the interface is still not where I want it to be. For the upcoming release the UI will be completely redone. The WWDC talk recommends starting on paper to get a rough idea of how the app should look. Drawing on a piece of paper is fast and gives an immediate impression of the general look of the app. So I tried slightly different variations of the interface on paper first and voilà: while drawing I found things that would not look right.

some sketches of the new user interface

After completing the sketches I moved on to Keynote. Keynote??? Yeah, exactly, that was what I thought when I watched the talk. The guys recommended Keynote for UI prototyping for a good reason: it is fast and easy to design an interface from a combination of screenshots of existing apps, shapes, and text. And the killer feature is definitely the animation toolset. It is very fast to try different UI animations by using the Magic Move feature for transitioning between frames.

evolution of the ui

I tried it today and have to say that I feel faster than when working with Photoshop, maybe because of the limited toolset (which isn’t a bad thing). After completing a design I exported it as an image and sent it to my iPhone. Seeing a design directly on the target device is priceless, because you get instant feedback on how it “feels” on the device itself, which is something very different from only evaluating it on a computer screen. This method let me quickly recognize flaws in the design because I could instantly see which elements didn’t work well. Another plus is being able to send the screenshots to other people and ask them questions like “how do you like the interface, and what do you not like?”, “do you know how to use the app?” and so on. This step is still ahead of me, but I will give it a try, since the feedback of [potential] users is very important.

In conclusion, I have to say that I really enjoyed trying out the tips from the WWDC talk, and I think these steps will help me design much better interfaces right from the start! It may look like wasted time at first, but you will definitely find things that don’t work so well in your design before writing a single line of code, which saves much more time in the end. If you do not already use this or a similar way to design your interfaces, you should give it a try as soon as you can!

How to install new Xcode themes

If you are not satisfied with the default syntax highlighting in Xcode and the pre-installed themes aren’t that useful for you, there is an easy way to install new ones. First, open Finder and navigate to:

~/Library/Developer/Xcode

(to do so, press cmd+shift+g and type in the path). Now create a new folder called

FontAndColorThemes

You are now ready to color up Xcode with new themes. A good starting point for finding some is to search on github.com for “Xcode themes”. Typically you will find files with the extension:

.dvtcolortheme

Copy this file into the newly created directory. If you restart Xcode now, you will see your installed themes under Preferences > Fonts & Colors. If everything went fine it should look like this:

fonts_colors

Saving data with UIDocument in iOS

Reading and writing data in iOS is crucial if your app depends on it. In my app Stepr (iTunes link) I made the naive mistake of saving all my data into a plist file with a (too) simple mechanism, and this approach did not make my users very happy. Stepr is basically a pedometer which reads the data from the M7 coprocessor in the iPhone 5s, compares it to a goal set by the user, and saves all steps of a day into a file (more on this in a separate post). The user can view all the recorded steps afterwards in a statistic and watch the overall progress.

So what was wrong with saving into a plist file? In general, nothing. If you know when your data is loaded and when it is ready to be saved, you are (probably) fine. But for Stepr things are a bit more complicated. The app utilizes Background Fetch to update the data even when the app is not active: Stepr is launched in the background and gets about 30 seconds to do its work before iOS quits it. Querying the CoreMotion framework is threaded too, so you don’t really know when updates are done. So in the worst case Stepr tried to load data, update it, and save it at nearly the same time. Data got corrupted and the plist file was broken. Not good!
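For illustration, this kind of corruption is exactly what coordinated, atomic writes prevent: write the new contents to a temporary file and then atomically swap it into place, so a concurrent reader sees either the old complete file or the new one, never a half-written mix. A minimal sketch of that idea (in Java; this is not Stepr’s or UIDocument’s actual implementation, and the names are made up):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SafeSave {
    // Write the new contents to a temp file next to the target, then
    // atomically rename it over the real file. A reader that opens the
    // target mid-save still gets a complete, consistent file.
    public static void save(Path target, byte[] data) throws IOException {
        Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
        Files.write(tmp, data);
        Files.move(tmp, target,
                   StandardCopyOption.REPLACE_EXISTING,
                   StandardCopyOption.ATOMIC_MOVE);
    }
}
```

The temp file lives in the same directory as the target so the rename stays on one filesystem, which is what makes the move atomic.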

The solution to this problem was to switch to UIDocument for file operations. But how to do it right? I asked myself this question and found a very good tutorial by Kevin Hunter of Silver Bay Technologies on how to implement UIDocument for your file operations. Best of all, the tutorial also covers unit tests and test-driven development (TDD)! After working through it, my document knows when loading data is done, when it has unsaved data, and when the data is saved. The data management in Stepr is very robust now. Awesome!

Thanks to this tutorial I was able to solve my problem in Stepr, learnt a lot about TDD, and also picked up how to develop a mechanism to migrate from the old plist file to the new UIDocument-powered file, with unit tests and all the nice stuff.

I can highly recommend reading this tutorial; it’s one of the best I’ve found so far! (Link)

making of 1 million particles

this time I will give you some insight into how to create a gpu-driven particle system with opengl and glsl. for most of my opengl work I choose cinder and highly recommend getting in touch with it. knowing cinder already is not essential but gives a better understanding of the text. also, since this is just a making-of, not a step-by-step guide, some OpenGL and shader knowledge is required.

1mp_bw

before we dive into the code I think it’s good to get an overview of how the system works. the base of this particle system is a so-called ping-pong framebuffer object. ping-pong means that you have two framebuffer objects (fbos) which are drawn alternately: while fbo A is drawn, fbo B is used for calculations. on the next frame B is drawn and A is used for calculations, and so on. the particle movement is calculated by a glsl shader, and all results (current position, velocity,…) are saved into textures. the drawing of the particles is also controlled by a shader, which controls opacity and size. each particle has a time to live; when it’s old enough it is respawned at a new position with its initial velocity. you see there is not that much going on, so now let’s look at the code a little deeper!
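the ping-pong idea can be sketched without any opengl at all: keep two buffers, read from one, write into the other, and swap the roles each frame. a minimal sketch in java (plain arrays stand in for the fbo textures, and the +1 update stands in for the movement shader; all names here are made up):

```java
// Minimal ping-pong update loop sketched with plain arrays instead of FBOs.
public class PingPong {
    float[] bufA, bufB;
    boolean readFromA = true;

    public PingPong(int size) {
        bufA = new float[size];
        bufB = new float[size];
    }

    // One simulation step: read from the current buffer, write the result
    // into the other one, then swap roles. No data is ever copied.
    public void step() {
        float[] src = readFromA ? bufA : bufB;
        float[] dst = readFromA ? bufB : bufA;
        for (int i = 0; i < src.length; i++)
            dst[i] = src[i] + 1.0f; // stand-in for the position/velocity shader
        readFromA = !readFromA;
    }

    // The buffer holding the most recent results (the one you would draw).
    public float[] current() {
        return readFromA ? bufA : bufB;
    }
}
```

the swap is the whole trick: because you never read and write the same buffer in one step, the gpu can run the update shader without any synchronization hazards.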


1 million particles revisited

1million particles

here we go again: I reworked my gpgpu particle system from some time ago (link). the new version offers better particle movement driven by perlin noise. it is also a bit more colorful, since the original version was black & white only. aside from some minor code tweaks, the big news is that you can grab a copy on github now! i’ve had some requests to share the code but never felt it was good enough to give to others (the new version might not be that much better… ;-) ). i hope it will help people learn something about gpgpu programming and OpenGL in general :-) at the moment I am also writing a “making of” to explain some of the nifty stuff a bit more, so check the blog in the next days!

github

new processing sketch — cubicle

we have updated our home with some new furniture. as a result we now have more space to hang some pictures, and what is better than creating the pictures yourself? yes, right: code them yourself!!! at least if you can’t draw.

the sketch I made today is called “cubicle” and does nothing more than draw some quads and rotate every quad by a few degrees. the result is a vortex of quads :-)

cubicle


the code behind it is fairly simple, nothing special to explain: just some basic setup of size and settings, and after that we’re ready to go. it draws quads until they reach the border set in the condition of the while loop. have a look:

size(1000,1000);
background(255);

smooth();
strokeWeight(0.5);
noFill();

float initialSize = 20.0f;
float rotation = 0.430f;

float strokeAlph = 30.0f; 

while(initialSize < height - 300)
{
  stroke(0,strokeAlph); 
  pushMatrix();
  translate(width/2, height/2);
  rotate(rotation);
  rect( 0 - initialSize / 2, 0 - initialSize / 2, initialSize, initialSize);
  popMatrix();

  initialSize += sqrt( 2 * pow(initialSize, 2) ) * .0033; // grow by a fraction of the square's diagonal
  strokeAlph += .1f;                                      // fade the stroke in slowly
  rotation += noise(PI / 3);                              // noise() with a constant argument returns a constant, so this adds a fixed angle
}

if you want you can grab the code (and other sketches too) on my github!