Greetings, friend — welcome back! In our last optimization discussion, we explored a nature-inspired routine known as Particle Swarm Optimization (PSO). Historically, PSO was one of the first nature-based optimization algorithms to come on the scene, arriving around the same time as Ant Colony Optimization (ACO), with the Artificial Bee Colony (ABC) algorithm following later. In that discussion, we built a basic PSO script from scratch using the Julia language.
In this tutorial, we’ll consider another nature-inspired optimization routine called the Emperor Penguin Colony (EPC). As its name implies, the routine is based on Emperor penguins — creatures known to stay warm in frigid temperatures by huddling together to conserve body heat. It is also well known that their huddles rotate about a central point, and do so in a spiral-like manner.
So while EPC may resemble PSO in a number of ways (namely, the random distribution of "particles" across an N-dimensional parameter space), the way the two methods explore that space is significantly different. As you'll recall, PSO moves its particles along straight-line paths; in our algorithm, we'll build in a spiral movement instead. Who's ready to get started?
In our last optimization discussion, we first generated a dataset containing both local and global maxima and minima. Let's go ahead and grab that code. By the way, the reason we vertically flip the dataset when we plot (reverse(z,dims=1)) is due to how Julia indexes its y-axis values. Flipping the dataset helps us align the (x,y) coordinates of our dataframe with the orientation of our function, z. Since we already know what this dataset looks like, we won't bother showing the image here.
```julia
using PyPlot

# Generate dummy data
x = -10:0.1:10
y = -10:0.1:10
z = cos.(x'./5) .* sin.(y./3)
f = (x,y) -> cos(x/5) * sin(y/3)

# Plot data
figure()
imshow(reverse(z, dims=1)); axis("off")
show(); gcf()
```
Similar to PSO, we still want to distribute particles evenly about the two-dimensional parameter space. Here, we can make use of three packages. Random helps us set a seed value, so that our "random" particles end up placed in the same locations each time. The Distributions package allows us to use the Uniform function to find positions for the particles. Finally, we can store all this information in a dataframe object, utilizing the DataFrames package.
```julia
using Random
using DataFrames
using Distributions

# Generate initial population
particles = 20
Random.seed!(2)
xpos = rand(Uniform(minimum(x), maximum(x)), particles)
ypos = rand(Uniform(minimum(y), maximum(y)), particles)

# Evaluate the function at each particle's position
zpos = [f(xpos[i], ypos[i]) for i in 1:particles]

# Store particles/values in a dataframe
curDF = DataFrame(X=xpos, Y=ypos, Value=zpos)

# Sort dataframe so the largest ("best") value comes first
sort!(curDF, :Value, rev=true)
```
At this point, we have a dataframe that looks something like this — a table of X values, Y values, and the Value at each coordinate location. Our penguins have been placed. March on, ye noble birds!
```
 Row │ X         Y        Value
     │ Float64   Float64  Float64
─────┼─────────────────────────────
   1 │ -1.26896  4.18132  0.95284
   2 │  2.02622  4.67753  0.918944
   3 │ -2.09138  4.99832  0.909644
  ⋮  │    ⋮         ⋮        ⋮
```
Now that we've initialized our penguins and stored their initial locations, it's time to determine the movement path for each. As we discussed earlier, we're going to program in a spiral trajectory. Furthermore, we really only need to calculate the trajectory between two particles; if we can get this to work, we can just call the spiral function in each iteration and feed it the two points we're interested in, namely the current "best" position and the current outlying penguin position.
In addition to the starting and stopping points (p1 and p2), we only need two parameters to define the spiral: the number of points used to trace the spiral (rez), and the number of revolutions we want the spiral to make (rev). The basic equations for calculating the x and y positions of each point in the spiral are based on sines and cosines, but we also throw in an exponentially decaying term — this forces the spiral to be a logarithmic spiral, rather than an Archimedean spiral. We also include an angular offset to make sure our spiral is tilted correctly, so that its tip meets our designated point. The LinearAlgebra package also allows us to use the norm function, which is simply the distance between p1 and p2. This is honestly the hardest part of the entire algorithm, since our bookkeeping and time iteration will be nearly identical to our PSO routine!
```julia
using LinearAlgebra

function spiral_connect(p1, p2, rez=50, rev=5)
    # Distance between the two points
    r = norm(p1 .- p2)
    # Angular offset from p2 to p1 (two-argument atan handles all quadrants)
    theta_offset = atan(p1[2] - p2[2], p1[1] - p2[1])
    # Radius decay parameter as the spiral winds inward
    t = range(0, r, length=rez)
    # Angle as the spiral winds inward
    theta = range(0, 2*pi*rev, length=rez) .+ theta_offset
    x = cos.(theta) .* exp.(-t) .* r .+ p2[1]
    y = sin.(theta) .* exp.(-t) .* r .+ p2[2]
    return x, y
end
```
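Before wiring this into the optimizer, it's worth sanity-checking the geometry. The snippet below (which repeats spiral_connect so it runs on its own, with the two-argument atan handling the quadrant correction) picks two arbitrary points and confirms the path starts at p1 and winds in toward p2:

```julia
using LinearAlgebra

# spiral_connect, repeated here so this snippet runs standalone
function spiral_connect(p1, p2, rez=50, rev=5)
    r = norm(p1 .- p2)
    theta_offset = atan(p1[2] - p2[2], p1[1] - p2[1])
    t = range(0, r, length=rez)
    theta = range(0, 2*pi*rev, length=rez) .+ theta_offset
    return cos.(theta).*exp.(-t).*r .+ p2[1], sin.(theta).*exp.(-t).*r .+ p2[2]
end

# Sanity check: the spiral should start at p1 and wind in toward p2
p1, p2 = [-4.0, 6.0], [2.0, -1.0]
sx, sy = spiral_connect(p1, p2)
println((sx[1], sy[1]))       # first point coincides with p1
println((sx[end], sy[end]))   # last point lands close to p2
```

The endpoint doesn't hit p2 exactly — it stops at a radius of r·e^(−r) — but for the distances in our ±10 window that residual is tiny, and the optimizer only samples points along the path anyway.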
…aaaaand that should do it! If we do a quick plot of everything we've built so far, we can construct something like the following image. The red arrow (and dot) shows the position of the penguin located at the current global best maximum we've discovered. We also have a current (or "moving") penguin, identified by the solid yellow arrow. The spiral path between the two is shown with a dashed orange line. Other penguins are identified as black dots, and one is pointed out by a black arrow. Finally, we see the heatmap plot of our function f(x,y) in the background.
Marching through time
So far, we've sprinkled our penguins across our landscape and defined their marching paths. Now all we have to do is set things in motion and keep track of what we find! As we said, this portion of the algorithm is very similar to our PSO routine, but let's briefly explain it anyway.
For each iteration, we want to make sure we have the latest position and value data. We then sort our dataframe to make sure we have our largest (“best”) value as the first row of information. With this data in hand, we begin observing the penguins!
```julia
generations = 50  # number of time steps to march through (pick to taste)

for t in 1:generations
    # Define current dataframe
    if t == 1; global newpop = copy(curDF); else; newpop = copy(newDF); end

    # Sort dataframe so the best value sits in the first row
    sort!(newpop, names(newpop)[end], rev=true)
    global bestPrams = [newpop[1, a] for a in 1:length(names(newpop))]

    # Iterate through penguins
    for i in 1:size(newpop, 1)
        curPrams = [newpop[i, a] for a in 1:length(names(newpop))-1]

        # Plan waddle path for current penguin
        sx, sy = get_spiral(newpop, i, x, y)
        newPrams = [sx[end-1], sy[end-1]]
        zval = f(newPrams...)
        newpos = DataFrame(hcat(newPrams', zval), :auto)

        # Save location and value of current penguin
        # (the best penguin, row 1, holds its position)
        if i == 1; global newDF = DataFrame(x1=newpop[1,1], x2=newpop[1,2], x3=newpop[1,3])
        else; append!(newDF, newpos); end
    end
end
```
For each penguin, we look at where it's located with respect to the global best position, then we form a spiral between the two positions. The get_spiral function is a simple wrapper — so simple it is not included here. Its innards only make sure that the x and y coordinates returned by our spiral_connect function do not lie outside the bounds of our data window.
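If you'd like something concrete, here is a minimal sketch of that bounds check — note that clamp_path is a hypothetical helper name for illustration, not the actual innards of get_spiral:

```julia
# Hypothetical helper (not the article's actual get_spiral internals):
# pin every point of a spiral path back inside the data window.
clamp_path(sx, sy, x, y) = (clamp.(sx, minimum(x), maximum(x)),
                            clamp.(sy, minimum(y), maximum(y)))

# Example: a path that wanders past the ±10 window gets pinned to the edge
x = -10:0.1:10
y = -10:0.1:10
sx = [9.0, 10.5, 12.0]
sy = [-9.5, -10.2, -8.0]
cx, cy = clamp_path(sx, sy, x, y)
println(cx)  # [9.0, 10.0, 10.0]
println(cy)  # [-9.5, -10.0, -8.0]
```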
We then have the penguin waddle forward one step (one waddle?) along its new path, and we record its new position and value in a new dataframe. After each penguin has had a chance to waddle along its unique path, we save out the data so it can be re-defined as the newest set of data in the first line of our next time slice! Let's fire it up and see how the penguins do.
Success! The penguins seem to be doing really well, and we can see that they indeed converge on what appears to be the global best location on our heatmap. Moreover, we see that the paths the penguins take are quite different from the linear paths our particles took in our PSO algorithm.
In this tutorial, we expanded on the PSO algorithm concept to build a penguin colony optimization routine. As with PSO, the function we optimized here is simple and intuitive, but this will not always be the case with real-world applications. Additionally, this exercise helped strengthen our intuition about this and other similar optimization routines, while giving us another tool in our toolkit for handling difficult data analysis in an efficient and fun manner!
So what’s next? As of this writing, there are no fewer than 249 different nature-inspired optimization techniques. While we certainly will not cover all of them extensively, it is likely we’ll look at another one or two of them — so stay tuned! But for now, thank you for dropping by! To stay updated as I post, feel free to like, comment, and subscribe. See you next time!