The classic range() (and, in the 2.* series, xrange()) is useful for getting an iterator of numbers. Its full signature is range(start, stop[, step]).

So you can do, e.g., range(5, 10) = [5, 6, 7, 8, 9], or range(6, 13, 3) = [6, 9, 12].
But as far as I know there’s not an easy, built-in way to iterate over a range-like set of integers defined both by the range and a number of parts desired.
An example: you want five evenly distributed numbers starting with 1 and ending with 10, so something like [3, 5, 6, 8, 10].
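One way to get that spread, as a quick sketch (assuming Python 3, whose round() behavior is relied on throughout this post; the variable names are mine):

```python
# Five evenly distributed values over 1..10, leaning on round()
start, end, parts = 1, 10, 5
values = [start + round((end - start) / parts * (i + 1)) for i in range(parts)]
print(values)  # [3, 5, 6, 8, 10]
```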
In my case, I had two similar use-cases. The first was the example above: a semi-arbitrary set of values in a range. I didn’t need to strictly include the endpoints in the values, but wanted a decent distribution of the values between start and end.
The second case was a little different, in that I wanted the range to always include the ends (it was a case of covering a whole range), but I also wanted to know how much each “step” over the range amounted to.
In the naïve version of this case, you don’t need the magnitude, as you could cheat and throw an extra piece in to account for slight differences (e.g., 11 / 3 => 1, 4, 7, 10, with the last piece being 10 through 11).
But there’s a nicer way to even out the extra pieces: round the fractional multiples, and let the rounding itself distribute the extras.
Example:
10 / 4 = 2.5
0 * 2.5 = 0.0
1 * 2.5 = 2.5; round(2.5) = 2
2 * 2.5 = 5.0
3 * 2.5 = 7.5; round(7.5) = 8
[0, 2, 5, 8]
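The arithmetic above, as a runnable check (assuming Python 3):

```python
# Distribute 10 over 4 parts; round() spreads the fractional remainders
parts = 4
part_size = 10 / parts  # 2.5
boundaries = [round(part_size * i) for i in range(parts)]
print(boundaries)  # [0, 2, 5, 8]
```

Note the uneven gaps (2, 3, 3, 2): the two “extra” units land in the middle rather than piling up at one end.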
(In Python 3, round(number[, ndigits]) sends an n.5 value to the even side, whether called with a single argument or with ndigits=0; Python 2’s round() instead rounded halves away from zero.)
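A quick demonstration of this round-half-to-even behavior:

```python
# Python 3's round() sends exact .5 values to the nearest even integer
halves = [round(n + 0.5) for n in range(5)]
print(halves)  # [0, 2, 2, 4, 4]
```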
In this case, the caller could buffer the previous value and compute the gap/step on its own, but this is Python, so we might as well give the function a mode that yields it directly.
Without further ado, this is what I came up with:
def equal_parts(start, end, parts, include_step=False):
    part_size = (end - start) / float(parts)
    for i in range(parts):
        part = start + round(part_size * i)
        step = start + round(part_size * (i + 1)) - part
        if include_step:
            yield (part, step)
        else:
            yield part + step
It’s messier than it needs to be, due to its dual-use nature. It’s arguably cleaner to have a second function handle the include_step=False case:
def equal_parts_only(start, end, parts):
    for part, step in equal_parts(start, end, parts):
        yield step + part
That function lets us remove the conditional business at the end of the original equal_parts:
def equal_parts(start, end, parts):
    part_size = (end - start) / float(parts)
    for i in range(parts):
        part = start + round(part_size * i)
        step = start + round(part_size * (i + 1)) - part
        yield (part, step)
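As a sanity check, here’s the generator exercised on the 10-over-4 example from earlier (the definition is repeated so the snippet runs on its own):

```python
# Repeated from above so this snippet is self-contained
def equal_parts(start, end, parts):
    part_size = (end - start) / float(parts)
    for i in range(parts):
        part = start + round(part_size * i)
        step = start + round(part_size * (i + 1)) - part
        yield (part, step)

pairs = list(equal_parts(0, 10, 4))
print(pairs)  # [(0, 2), (2, 3), (5, 3), (8, 2)]
```

The steps sum back to the full range (2 + 3 + 3 + 2 = 10), so the parts tile 0 through 10 with no gaps or overlaps.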
The stepless version has another nice property: you can ask for more parts than there are integers in the range. What if you do equal_parts_only(0, 10, 11)? You get [1, 2, 3, 4, 5, 5, 6, 7, 8, 9, 10], with a value repeating to make up the count.
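Here’s a standalone check of that oversubscribed case (both definitions repeated so the snippet runs on its own):

```python
# Repeated from above so this snippet is self-contained
def equal_parts(start, end, parts):
    part_size = (end - start) / float(parts)
    for i in range(parts):
        part = start + round(part_size * i)
        step = start + round(part_size * (i + 1)) - part
        yield (part, step)

def equal_parts_only(start, end, parts):
    for part, step in equal_parts(start, end, parts):
        yield step + part

values = list(equal_parts_only(0, 10, 11))
print(values)  # [1, 2, 3, 4, 5, 5, 6, 7, 8, 9, 10]
```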
I wrote a GIMP plugin to create stepped (or random) Gaussian blurs on an image. The stepless version lets me create the set of blur levels, while the step-including version lets me properly select (mostly-)even parts of the image.
Here’s an image made with this plugin, using both modes of this function:
If anyone wants a copy of the plugin, let me know and I’ll put it on GitHub or such.