Tactical vs Strategic Design

Last time, I wrote a bit about the initial design work for putting together a graph visualization library. I made a little progress, then got stuck (mentally) on interactivity. Here's a little about that process.

The Problem

In the future, I expect this graphing library to have somewhere between "some" and "quite a few" nodes, any of which can be visible at any given time. The challenge is figuring out which, if any, are clicked on.

(Figure: a simple graph to display, but which node gets clicked?)

Solutions

Most of the libraries that I've worked with have some kind of event handler that carries information like what kind of event was generated, where it happened, and any other relevant state.

For example, you might get something that translates to: "The user pressed the left mouse button down 392 pixels from the leftmost edge of the window and 107 pixels from the top." Since I want this library to be interactive, i.e., to do something when you click the left mouse button, I need to take that information and correlate it with what's currently on the screen. In my case, that means rendered drawings of nodes and arrows, maybe with some panning behavior so you can look through the whole graph.

The standard way of doing this, gathered by perusing the source code of a few pygame-based widget libraries, is to simply iterate through the rectangular bounding boxes of the objects in the scene and check whether any of them contain the event location. As someone who dabbles in performance optimization, this struck me as an O(n) problem and therefore not fast enough.
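To make that concrete, here's a minimal sketch of what that linear scan might look like with pygame. The registry of (node, bounding box) pairs and the node_at helper are hypothetical names for illustration, not anything pulled from an existing widget library:

import pygame


def node_at(registry, event_pos):
    """Return the first node whose bounding box contains the click, if any."""
    for node, bbox in registry:
        # pygame.Rect.collidepoint does the "click in box" check for us
        if bbox.collidepoint(event_pos):
            return node
    return None


# Inside an event loop, usage might look like:
#     for event in pygame.event.get():
#         if event.type == pygame.MOUSEBUTTONDOWN and event.button == 1:
#             clicked = node_at(registry, event.pos)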

The next thing that came to mind was something like a sparse quadtree, which would answer point queries in roughly logarithmic time. That would be much better, especially considering that I intend to eventually have some kind of pan/zoom interface and hundreds of nodes.

I hemmed and hawed about this for a bit and dug through some more source code. I figured someone must have already implemented this in the graphics libraries, so I searched anyway (no luck, per the above). I'm sure that there is a Python library for processing sparse quadtrees, but I had hoped it was already integral to the widget libraries.

Then, of course, I realized that I was committing the cardinal sin of programming. I thought back on the original design constraints - developer speed, rather than anything else - and realized I was prematurely optimizing.

While switching the click/check interface from one that takes a collection of nodes (and their bounding boxes) to one that takes a quadtree (or something similar) will require some rewriting and additional state management in the future, that's still better than never getting there at all because of undue concern for performance when the initial node set will probably number in the tens. In short, O(tens) is not worth worrying about when T(tens) is milliseconds at worst (and the fastest monitors I'm likely to encounter have a refresh period of about 8 milliseconds).

Conclusion

For now, I'm just going to use a linear registry (list) for this kind of stuff and treat it as semi-global state (probably encapsulated in whatever world/graph engine keeps the whole thing stitched together). I'd prefer to make it more efficient, but if there's already a function to do 'click in box' collision detection, it's worth reusing.

Thanks for reading!

Designing a Graph Visualization Library

Why, you might ask, am I looking to design a graph visualization library when such marvels of the modern world as plotly, d3, graphviz, NetworkX and the like all exist?

I ask myself the same thing.

The main thing is that I want to implement a project that requires interactivity with a directed graph that has no guarantees of an acyclic nature. When I say interactive, I mean (e.g.) in-place editable, maybe with some fancy styling to make visual navigation simpler.

This doesn't exist, as far as I can tell.

The closest thing I can find is "write up some data traversal in Python, then ship it to JS." I don't like JS or TypeScript. I flirted with Brython an age ago and recently discovered the beautiful world of WASM-enhanced graphics through Rust. I think it would be beautiful if I could write a library that is more or less agnostic to the language it lives in.

In the recent past, I've had good luck integrating Python and Rust through the (truly amazing) pyo3 project. While I think the final home of this library will probably be in Rust, targeting a generic canvas-like API (e.g., HTML5 Canvas), I don't want to start there. To quote a friend and mentor from when I mentioned prototyping in C: malevolent laughter. So, Python for prototyping.

Background Assumptions

In that vein, I know a couple things about the project at present:

  • Assume a 2D drawing target

  • Assume Python for initial development, adding Rust bindings eventually

  • Provide ways of styling and coloring both the individual graph nodes and their background

  • I want (at least) a directed graph. The data I care about is intrinsically directed. Other types (undirected, trees, etc.) are a bonus but not necessary.

  • I need each node to support interactivity:

    • Potentially expand the graph to view subgraphs

    • Create new nodes

    • Modify properties of an existing node

    • Rearrange the current layout of nodes

This is probably a lot. Part of this post is to explore whether I can do this kind of design at a text prompt, rather than on paper (which is where I do most of my thinking).

So far, I think we're mostly in the realm of the "what", rather than the "how" (save for maybe the 2D drawing target). Next steps are to identify what's already out there, for a middle-down-up sorta design process. Since this is for personal use, rather than for someone else to implement, I can keep it fairly loose and just use these notes as a record for future reference.

I know the following stuff exists:

  • Data structures and representation in both languages (e.g., NetworkX, graphviz/DOT-alikes, petgraph, et al.)

  • 2D rendering systems (pygame, sdl2, more generic stuff like plotters)

  • Widget libraries for many (e.g.) sdl2 backends (ThorPy, pushrod)

I assume the following exist:

  • Graph algorithm implementations

I already have a basic implementation of serde-based file I/O for the data that I want to process, which will also lend itself well to something like diesel if and when I get to that point.

Design Considerations

With the basic design information captured, what else is there? As a designer/architect, questions about -ility tradeoffs come up regularly (things like speed of execution, speed of development, maintainability, security, etc.). For now, I don't have many hard constraints across most of -ility space. It would be nice to have something that's maintainable, secure, compliant and the like, but my main concern is that the tool I'll build with this library needs to be useful "yesterday", so development speed wins out (another point for Python, in theory).

Since I plan on recoding this in Rust at some point in the future, as long as I'm conscientious about types and safety now, I shouldn't get bitten too badly by the borrow checker and compiler later.

Other than that, the only other thing that comes to mind is that the visualizer might end up targeting several different backends. At present, this will probably be just sdl2, but I think I'll want to WASM this in the future. As such, I'll try to avoid any super-tricky/custom code, but if it happens to work for now, so be it; we can always come back to the problem later.

Initial Design

With all that in mind, I think the next steps are to start cracking out some data structures that represent the important knobs that I want to fiddle:

  • Text

  • Color

  • Decoration(s)

  • Shape(s)

  • Relationship(s)

Of these, the only one worth commenting on is relationships. I see this as a set of connections to other nodes (either some kind of ID or the object directly, depending on how much I want to fight the borrow checker later). Arrows between nodes will probably get their own styling later, but relationships should be captured as a top-level concept for now.
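As a first stab at what those knobs might look like in code, here's a rough attrs-based sketch; every name here is provisional and just for illustration, not a settled design:

import attr


@attr.s(auto_attribs=True)
class NodeStyle:
    """Provisional bundle of the per-node knobs I want to fiddle with."""

    text: str = ""
    color: str = "#000000"
    decorations: list = attr.Factory(list)
    shape: str = "rectangle"


@attr.s(auto_attribs=True)
class Node:
    """A single graph node plus its outgoing relationships."""

    node_id: int
    style: NodeStyle = attr.Factory(NodeStyle)
    # Relationships as a set of target node IDs, keeping the borrow checker
    # fight for later
    relationships: set = attr.Factory(set)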

In addition to these, there will probably be some hidden or second-class values that, while still important, aren't something I'll manually configure:

  • Location

  • Size/Scale (I think everything the same size/scale will be good to start)

  • Opacity

  • Border thickness/stroke width

Plus, there will be some global parameters that I think I'll tinker with, then hard-code as module constants until that becomes insufficient (a quick sketch of how they might look follows the list):

  • Layout technique (i.e., energy equation(s) for automatic node layout)

  • Canvas size

  • Relationship arrow styles (thickness, arrow head, curviness)

  • Basic node styling (stroke thickness, opacity, etc.)

  • Background fill color (initial, but should be configurable)

  • Window scaling process/technique (e.g., does each node scale with canvas size, or does the canvas scale and the user has to manually change visualization scale to "zoom in"?)
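Here's what those module constants might look like to start; the names and values are placeholders, not decisions:

# Provisional module-level defaults; all names and values are placeholders
LAYOUT_TECHNIQUE = "force_directed"   # energy equation for automatic layout
CANVAS_SIZE = (1280, 720)             # pixels
ARROW_STYLE = {"thickness": 2, "head": "triangle", "curviness": 0.0}
NODE_STYLE_DEFAULTS = {"stroke_width": 1, "opacity": 1.0}
BACKGROUND_FILL = "#ffffff"
SCALE_WITH_WINDOW = False             # if False, the user zooms manually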

Conclusions

This was interesting. It's not too dissimilar from the process by which I do initial design. Normally, I'd now put this out for review and get some gut checks on what's totally bonkers about it. As I'm working solo at the moment, I'm going to see how the development speed of going it alone balances against the ongoing filtering/feedback mechanisms I usually encounter on a team.

If you have read this and made it to this point, cool! I hope you got something out of it and thanks for reading. I've started to consider how to communicate out, but haven't quite sorted it yet. When I do, you'll find out :)

Welcome Back

I'm great at finishing things - at least, up until now!

It's a new year and a new attitude. My plan is to start pumping out content again - at least a sorta weekly affair, until I figure out a new direction. Not sure if I'll finish up the testing tutorial, but it'll go on the heap for future sorting.

I wanted to get something back out there - trying the "open up the terminal" approach to content creation rather than the "build the project to completion" one. Cheers! 🎉

Intro to Unit Testing

Welcome back!

Last time, I introduced the concept of entity/component/system (ECS) using Esper. I intended to introduce unit testing concepts, but realized about halfway through that an introduction to ECS itself merited a full post. This is the second part, wherein I hope to show unit testing.

What is a Unit Test?

A brief disclaimer: Wikipedia does a great job introducing these concepts - better than I can do here.

What I will do is show Python examples and sprinkle prose around them to make the topic more accessible.

To that end, a unit test is a snippet of code that proves another snippet of code demonstrates some behavior. I'll introduce two terms to better define "behavior".

When writing code, there are often several things that the code is intended to do. First, it should "do something." In our ECS toy, this something is to simply move an object's center somewhere in a world based on its velocity. This is what I define as a "positive" behavior; if all goes well, the object should move according to some algorithm.

Positive Testing

In general, this is the only kind of unit test that gets written. While it isn't useless by itself, it's not comprehensive. I plan on getting into how to measure that quality of "comprehensiveness" (hint: it's not just code coverage), but for now suffice it to say that there's more to it than "if I put in the right inputs, the code does what I want it to."

As a trivial example, consider the / operator in Python. In Python 2, it behaved like this:

>>> 3.0 / 2.0
1.5

Pretty obvious, right? Three-point-oh divided by two-point-oh is one-and-one-half. Say we're writing unit tests for some method, call it divide_two_nums, defined as:

def divide_two_nums(first, second):
    # type: (float, float) -> float
    """
    Divide two numbers and return the result.

    :param first: The numerator of the division
    :param second: The denominator of the division
    :returns: The result of dividing first by second
    """
    return first / second

Take note that we provided Python 2 style type comments, which indicate that we intend to accept two floating point numbers (float) and return one. We can then define a test that mimics our little sample:

def test_div_two_floats():
    # type: () -> None
    """Verify that dividing two floats produces the right result."""
    assert divide_two_nums(3.0, 2.0) == 1.5

We'll get more into the syntax of pytest testing in a bit, but consider this to be an example of a unit test verifying positive behavior of a function. It's positive in the sense that we put in two values that we know the correct result for and expect the code to do the right thing.

Negative Testing

This contrasts with negative testing, where input that is reasonable to expect, but not "correct" in some way, is fed into the system under test and the results are verified. There are a few situations where this testing approach is worthwhile.

Boundary Testing

The first and most important is at the "boundary" of your system. The classic example of this is a user-facing application. Users are notorious for their cleverness; they will exercise your system in ways that mere mortals like me cannot comprehend. In a typical application, it's important to verify what happens when your users put in data that isn't "correct."

This goes for any system boundary, whether network, UI/UX 1, sensor input or even client developers if you're writing a code library. If there are critical parts of your system that will fail if incorrect input goes in, testing what kind of failure and/or how recovery methods work is just as essential as making sure the system does the right thing when given good input.

An example, using our contrived function: assuming Python 2, what happens when we break the contract of "two floats in"? Remember, Python is strongly but dynamically typed, so even if you suggest that floats should be the input (as in the type comments or docstrings), there isn't any enforcement beyond what you write yourself.

>>> divide_two_nums(3, 2)
1

This "issue" has been covered elsewhere. I bring it up because it demonstrates how a negative test might be useful to at least show how a system fails if a contract is broken. If you're writing a library that is expected to be "mission critical" or otherwise involved in safety, health or is of large economic importance, this might be the place where you don't want to let a 50% "error" through. It can serve to document the failure methods of the call for other users or provide insight in to when to clarify the spec you're working to. To address this "problem" you could change the implementation to something like:

def divide_two_floats(first, second):
    # type: (float, float) -> float
    """
    Divide two floating point numbers and return the result.

    :param first: The numerator of the division
    :param second: The denominator of the division
    :returns: The result of dividing first by second
    :raises TypeError: if either first or second are not floats
    """
    if not isinstance(first, float) or not isinstance(second, float):
        raise TypeError("Both inputs must be of type float.")
    return first / second

with a corresponding negative test like:

def test_div_two_floats_ints_err():
    # type: () -> None
    """Verify that an error is raised when non-floats are used."""
    with pytest.raises(TypeError):
        divide_two_floats(3, 2)

This assumes that it's critical to have the inputs be of the correct type 2. If not, a simple test like this one (assuming the first implementation) serves as a handy document in your codebase about how the code is intended to work:

def test_div_two_ints():
    # type: () -> None
    """Demonstrate behavior when ints are used in divide_two_nums."""
    assert divide_two_nums(3, 2) == 1

I mentioned that there were multiple places where negative testing is useful. As you work with a codebase, others will emerge, but the second worth mentioning is when you're working with a more complex system.

Error Reporting

The other place where negative testing comes in handy is in error reporting.

Complex systems that have interlocking pieces often have complex interactions between those pieces. Even if they don't, they often depend on some subsystem that has the potential to misbehave, e.g., computing hardware or sensors. In either case, the bigger a technological system gets, the more likely it is to encounter failure 3. Negative tests then help verify system behavior under exceptional or unexpected cases. Tests of error reporting mechanisms can be considered positive tests of failure modes, but the general idea is that by thinking of and applying negative testing principles, more thorough and stable systems can be built.
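As a small illustration (the sensor function and error type here are made up for this example, not from our ECS code), a negative test of an error-reporting path might look like:

import pytest


class SensorDisconnected(Exception):
    """Raised when the (hypothetical) sensor stops responding."""


def read_sensor(raw_reading):
    """Convert a raw reading in millimeters to meters, or report a disconnection."""
    if raw_reading is None:
        raise SensorDisconnected("No reading received; check the cable.")
    return float(raw_reading) / 1000.0


def test_read_sensor_reports_disconnect():
    # type: () -> None
    """Verify that a missing reading is reported as a disconnection."""
    with pytest.raises(SensorDisconnected) as exc_info:
        read_sensor(None)
    assert "check the cable" in str(exc_info.value)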

Unit Testing Concept Summary

To come back to it, unit tests are a way of verifying a small piece of behavior. Positive tests check that a system works with good input; negative tests show how a system fails when it doesn't get good input.

The last thing to say about unit tests is that they target a single part of the code and a single part of the system. I plan on exploring the spectrum of functional, unit, integration, property-based and mutation tests, but here's a quick litmus test for determining whether a test is a unit test.

A unit test is a unit test if it:

  • Only tests a small method or function

  • Relies on no external systems 4 to function

Even shorter: a unit test tests one, and only one, thing at a time.

ECS Unit Testing

After that long-winded intro, let's get back to our code.

The main issue with our previous test is that it used the Esper world framework to conduct the test. That is, there was no isolation of the individual components of the code during the test - we used the add_component, add_processor and add_entity methods. While this does test our code, it also implicitly tests the Esper code as well. This violates the Zen of Python by having an implicit test in addition to our explicit one.

Next, we're going to add some explicit tests.

Data Structure Testing

While these tests could be considered superfluous, they'll help make concrete the concepts of positive and negative testing.

We don't have a spec for this - I should write one - but we have some implicit behavior. Our two components, CenterOfMass2D and Velocity2D, are supposed to represent two-dimensional position and velocity for some object. We want them to store these values, and we expect them to be mutable. We expect that they contain some initial value.

We also have an implicit assumption that their components won't be updated in a dependent fashion. I have a lot to say about this, but it's way too much for now. Suffice it to say, we expect

>>> my_com = CenterOfMass2D(0.0, 0.0)
>>> my_com.pos_x_m = 1.0

to have no effect on my_com.pos_y_m. Finally, we have an expectation that we can initialize the darn thing without anything bad happening.

Sanity Test

Let's show that all of these assumptions and expectations are correct. To begin, just verify that if you put in values for X and Y nothing bad happens:

def test_com_init_no_err() -> None:
    """Verify instantiation of the center of mass component causes no errors."""
    CenterOfMass2D(1.0, 1.0)

I can hear the detractors now - what does this prove? If this test passes, it means that it's possible to construct an instance of CenterOfMass2D without anything going awry. This isn't needed with a library that you trust, but if you're working on a team or you're just learning how to use a new codebase, this can provide a basic sanity test that you've got everything installed and are calling things correctly.

A better way of putting it is this: it's equivalent to "get five points for writing your name at the top of the exam." If you can't get this far, something else is broken, and that needs to be addressed before any of your other tests can be meaningful.

Independent Storage Tests

The next pair of tests show basic storage and retrieval of values during construction:

def test_com_init_x_pos() -> None:
    """Verify the instantiation of the X coordinate."""
    some_com = example1.CenterOfMass2D(1.0, 1.0)
    assert some_com.pos_x_m == 1.0


def test_com_init_y_pos() -> None:
    """Verify the instantiation of the Y coordinate."""
    some_com = example1.CenterOfMass2D(1.0, 1.0)
    assert some_com.pos_y_m == 1.0

These are uninteresting by themselves, but demonstrate the concept of "one test, one assert". Much like the previous sanity test, these show basic functionality of the class. By having only one assert per test, we get better granularity in debugging and development. These tests still suffer the problem of lazy implementation 5, but that cannot be addressed by unit testing alone 6.

One thing to note is that these tests only verify the behavior of one attribute at a time. In this case, it's hard to imagine that storage during construction wouldn't work for both simultaneously, but we can verify it:

def test_com_init_xy_pos() -> None:
    """Verify the instantiation of the X and Y coordinates."""
    some_com = example1.CenterOfMass2D(1.0, 1.0)
    assert some_com.pos_x_m == 1.0 and some_com.pos_y_m == 1.0

This test verifies different behavior from the previous two. Earlier, we verified the statements:

  • "If I set the X parameter to be 1.0, can I retrieve the same value?"

  • "If I set the Y parameter to be 1.0, can I retrieve the same value?"

whereas this one verifies:

  • "If I set X and Y both to be 1.0, are they both set to 1.0?"

This becomes a more useful tool if the system under test demonstrates side effects or optional keywords which break the independence assumption. Even the attrs library - used for data structures - can break independence if something like __attrs_post_init__ is used.
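In the same spirit, a test of the mutation-independence assumption from earlier might look like the sketch below (not something from the repo, just an illustration):

def test_com_set_x_leaves_y_alone() -> None:
    """Verify that mutating the X coordinate does not change the Y coordinate."""
    some_com = example1.CenterOfMass2D(0.0, 0.0)
    some_com.pos_x_m = 1.0
    assert some_com.pos_y_m == 0.0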

Still on the list for future posts:

  • When to use negative testing

  • Independent/separable behavior in tests

  • Coding to a spec

  • Tests as documentation

  • When and how to unit test

  • When not to test

  • How to do pytest tests

  • Examples in our code

1

User Interface/User Experience, or when some human interacts with your system

2

If this type of coding is something that appeals to you for whatever reason, check out "contract programming" in general or the PyContracts library. I'd suggest looking at PyContracts if you're working with numerical computing routines to avoid infuriating float32 vs float64-style problems.

3

For a trivial example, consider single event upsets (SEU), which occur when a high-energy particle does something bad to your computing hardware.

4

This can be tricky to pin down. For example, a data structure library like attrs shouldn't be considered a system, but something like Pykka's actor registry should. I plan on going more into this later.

5

By lazy implementation I mean the problem where programmers sometimes implement a method like add2(3, 3) by writing something like add2 = lambda *_: 6, which sidesteps the point. These tests are vulnerable to this approach.

6

Technically it can, but there are methods which don't entail writing hundreds of unit tests.

ECS Testing Intro

I like to teach, write and explain things. While I find many fields interesting and worth exploring, one that I think I have some strength in is in software/system test. I want to provide a resource for those who are getting into system design through testing principles and I'd like to do it in a practical way.

One topic that has caught my interest of late is the "Entity Component System" (ECS) approach to game design and world processing. As a co-organizer of PySprings, I sometimes give talks or work on projects there. A recent toy I helped develop can be found here if you'd rather just jump in and play with a low-quality prototype. However, there's much better developed code if you stick through to the end.

In the interest of brevity, this article will just present an introduction to ECS and a basic test for a simple instance of it.

As a disclaimer before we get too much farther, this article assumes some knowledge of Python and general programming. In the future I might try and organize better (with curated curriculum), but that goes above and beyond the "half-assed" principle mentioned in a previous post.

ECS?

What is this ECS thing and why do I care? The wiki article does it better justice than I can. If you want more depth, head there before proceeding.

TL;DR - ECS is a way of structuring the logic of your code to make it easier to scale and was adopted by some C++ game engines. It's got three bits:

  • An "entity". Think of this as a database ID 1 and nothing else. It's just an integer.

  • A "component". If you know Rust, this is kinda like a trait, which can be applied to an entity. More on this later, as it's a bit confusing at first.

  • A "system". This is the cool part - it's a way of permuting each entity with a given component per "tick".

We're going to get to frameworks and libraries later, but to give a hands-on feel, here's an example component:

import attr
from attr import validators


@attr.s
class CenterOfMass2D:
    """
    Represent an object's center of mass in 2D Euclidean geometry.

    This class is used to represent the center of a rigid object, rather than
    any edges or bounding-box artifacts that may come from drawing or other
    representations.

    .. Note:: All distances are in meters.

    :param pos_x_m: The "x" coordinate in meters
    :param pos_y_m: The "y" coordinate in meters
    """

    pos_x_m: float = attr.ib(validator=validators.instance_of(float))
    pos_y_m: float = attr.ib(validator=validators.instance_of(float))

Here I'm using the excellent attrs library to create a no-boilerplate data class to hold the information, but the general structure should be approachable: define a class that holds two parameters, x and y. These parameters are both enforced to be floats (and are represented using type hints, which we'll cover later).

A similar process can be used to create a component which describes an object's velocity:

@attr.s
class Velocity2D:
    """
    Represent an object's velocity in 2D Euclidean geometry.

    This class represents the "group velocity" of an object - the velocity
    that a single entity would have if taken in aggregate.

    .. Note:: All velocities are in meters per second.

    :param vel_x_m_s: The "x" velocity in meters/second
    :param vel_y_m_s: The "y" velocity in meters/second
    """

    vel_x_m_s: float = attr.ib(validator=validators.instance_of(float))
    vel_y_m_s: float = attr.ib(validator=validators.instance_of(float))

These, when used together, allow an entity to have both a position and velocity.

I mentioned earlier that an entity was just an ID. How do we give an entity these components, then? First, we have to create a world and an entity to live in it:

world = esper.World()
some_entity = world.create_entity()

That's it! Now that something exists in the world, we can slap our components on it:

# Add the center of mass component to our entity, centered at 0, 0
world.add_component(some_entity, CenterOfMass2D(0.0, 0.0))

# Add the velocity component to the entity, with a "rightward" velocity of
# 1.0 m/s
world.add_component(some_entity, Velocity2D(1.0, 0.0))

Simple enough, I should think.

System...?

We've got an entity, some_entity, and some components - CenterOfMass2D and Velocity2D - so where is the "system"? At least in the framework that we're using here, this is handled by Processor classes.

In our case, we could update our center of mass position based on the current object's velocity. In its simplest form, neglecting gravity, aerodynamics and the rest (following the "spherical cow in a vacuum" approach), we would then have a system/processor that looks like:

# There is no available type stub for Processor, so mypy thinks it cannot be
# subclassed
class KinematicsProcessor(esper.Processor):  # type: ignore
    """
    Implement a simple stepwise approximation for displacement propagation.

    This processor will update all entities with centers of mass based on::

       -->    -->           -->
       x_1 =  x_0  +  dt  *  v

    where dt is a time quantity in seconds and passed in on creation.

    :param dt_sec: The time step to use for propagation, in seconds
    """

    def __init__(self, dt_sec: float):
        """Initialize the object."""
        # NOTE: The Esper documentation calls out the lack of need for calling
        # `super` for Processors
        self._dt_sec = dt_sec

    # This is compliant with the esper documentation; ignoring the lint
    def process(self) -> None:  # pylint: disable=arguments-differ
        """Execute linear interpolation for entities with centers of mass."""
        for _, (com, vel) in self.world.get_components(
            CenterOfMass2D, Velocity2D
        ):
            com.pos_x_m += self._dt_sec * vel.vel_x_m_s
            com.pos_y_m += self._dt_sec * vel.vel_y_m_s

If we ignore the extra fluff for appeasing the static analysis tools, this breaks down to a pair of updates.

com.pos_x_m += self._dt_sec * vel.vel_x_m_s
com.pos_y_m += self._dt_sec * vel.vel_y_m_s

These will update the center of mass position based on the entity's velocity. The details of how we get access to all of the entities are bound up in the implementation of the ECS framework, Esper. The take-away here is that the system will inspect every entity which has both the CenterOfMass2D and Velocity2D components and operate on them. It's as simple to add the processor to the world as it was to add the components:

# Set the time between ticks to be one second:
kinematics_proc = KinematicsProcessor(1.0)

# Add the processor to the world
world.add_processor(kinematics_proc)

Next Steps

Now that we've got all the fundamentals - an entity, two components and a system - we can get around to that testing thing that was the point of this whole article. How do we test something like this?

This post is running long, so we'll only do one test for now. It'll be in the unit test format, but it resembles an integration test much more than a pure unit test ought to - more on that in a later article. Purity will have to wait; we just want to get our little system going.

The most basic test I can think of is one which verifies that our component's position updates correctly if one tick is applied. In the code we wrote above, after one tick, our position should be (1.0, 0.0). My reasoning is this:

  1. (Given) Our initial position is (0.0 m, 0.0 m)

  2. (Given) Our initial velocity is (+1.0 m/s, 0.0 m/s)

  3. (Given) Our time step between updates is 1.0 s

  4. If we call the "update world" method once, we should only update our world once

  5. One update gives 1.0 m/s * 1s + 0.0 m for our X position, resulting in a new X position of 1.0 m

  6. Our velocity and position in the Y axis are both zero, so there should be no change there.

We're not going to test (6) now, but we should be able to assert that the new position in X is 1.0 m, based on the above chain of logic. One way to write this, using the pytest framework, looks like:

def test_one_step_x() -> None:
    """Test that a single propagation step is successful in X."""
    # Create a world
    world = esper.World()
    some_entity = world.create_entity()

    # Create an object at the origin
    world.add_component(some_entity, CenterOfMass2D(0.0, 0.0))

    # Create an object with a velocity of 1 m/s "to the right"
    world.add_component(some_entity, Velocity2D(1.0, 0.0))

    # For this test, set the time between ticks to be one second:
    kinematics_proc = KinematicsProcessor(1.0)

    # Add the processor to the world
    world.add_processor(kinematics_proc)

    # Execute one tick:
    world.process()

    # Make sure the difference in position is correct. Using the dreaded "=="
    # for floats.
    com_pos = world.component_for_entity(some_entity, CenterOfMass2D)

    assert com_pos.pos_x_m == 1.0

Most of the steps are stolen from the previous snippets: set up the world, create the entity, create the components, create the processor. The new code is about running the update (world.process()), getting the center of mass component (world.component_for_entity...) and finally the assertion.

If you're following along and all goes well, you should be able to run py.test and get the always-satisfying green bar with 1 passed in N.NNs.

Conclusion

In this article, I introduced ECS, showed some implementation and wrote a test for it. I didn't cover any of the environment setup, adding dependencies or anything to that end - these are all topics which justify their own time in the sun. That said, I don't mean to leave you hanging. The full code for this example can be found on the accompanying github repo in ecs_testing_intro/example1.py and tests/test_example_1.py.

1

Identifier, usually an auto-incrementing integer that's unique for a row

First

It's 2019, almost 2020. I started this site a long time ago, with a lot of inspiration but less motivation to write than I had expected. It's a long time later and I want to get back to it.

I intend for this to be a combination of a research log, accountability nag and bucket to store information in. If it helps my writing and publishing regularity, all the better.

I believe in open publishing and working in the open where possible. To that end, I plan on getting my github up to speed. While it's not evident from there, I do write code on a regular basis and it would be good to share.

The overriding theme of this blog-cum-repository will be "anything worth doing is worth half-assing to start." Ergo this post.