
E4X Macro for Haxe

I recently needed to navigate some XML in Haxe and noticed that there were few options for doing this quickly and easily.

I did notice Oleg’s Walker class, which brings some of the E4X functionality of AS3 to Haxe.
While the resulting code was more elegant than hand-writing loops and tests, it still felt too verbose, and I decided to add some macro sugar to it to cut down the syntax (and bring it closer in line with the E4X spec).

The result is the E4X class, which reduces the amount of code two- to three-fold (in comparison to a fully runtime, function-based solution). Due to Haxe language restrictions, the resulting syntax is not quite as compact as the AS3 equivalent, but it’s close.

Usage

E4X expressions must be wrapped in the macro call, and they return an iterator of values (the type of which is based on the last part of the expression).

To get all children:

var nodes:Iterator<Xml> = E4X.x(xml.child());

Here are some different ways to get a list of all the child nodes with the name “node”:

var xml:Xml;
var nodes:Iterator<Xml> = E4X.x(xml.node);

// or (for example)
var nodes:Iterator<Xml> = E4X.x(xml.child("node"));

// or (using an expression which will be wrapped in a function call)
var nodes:Iterator<Xml> = E4X.x(xml.child(nodeName=="node"));

// all of which are shortcuts for this filter expression
var nodes:Iterator<Xml> = E4X.x(xml.child(function(xml:Xml, _i:Int):Bool{return xml.nodeName=="node";}));

To get the text of a node, use the text() method:

var nodes:Iterator<String> = E4X.x(xml.text());

To access descendants, use the “desc()” method, or the underscore shortcut:

var nodes:Iterator<Xml> = E4X.x(xml.desc());
// or
var nodes:Iterator<Xml> = E4X.x(xml._());
// or just
var nodes:Iterator<Xml> = E4X.x(xml._);

Here are some different ways to get a list of all the descendant nodes with the name “node”:

var xml:Xml;
var nodes:Iterator<Xml> = E4X.x(xml._("node"));

// or (for example)
var nodes:Iterator<Xml> = E4X.x(xml.desc("node"));

// or (using an expression which will be wrapped in a function call)
var nodes:Iterator<Xml> = E4X.x(xml.desc(nodeName=="node"));

// all of which are shortcuts for this filter expression
var nodes:Iterator<Xml> = E4X.x(xml.desc(function(xml:Xml):Bool{return xml.nodeName=="node";}));

Getting a list of descendants that have an “id” attribute would be done like this (the a(“id”) call acts like a filter):

var nodes:Iterator<Xml> = E4X.x(xml._(a("id")));

// which could also be written as
var nodes:Iterator<Xml> = E4X.x(xml._(a(attName=="id")));

// both of which will be expanded to
var nodes:Iterator<Xml> = E4X.x(xml._(a(function(attName:String, attValue:String, xml:Xml):Bool{return attName=="id";})));

If, on the other hand, you wanted to get the “id” attributes themselves, you could do this:

var nodes:Iterator<Hash<String>> = E4X.x(xml._.a("id"));

To get all of the ancestors of any nodes with an “id” attribute equal to “test”, you could do this:

var nodes:Iterator<Xml> = E4X.x(xml._(a("id")=="test").ances());
// or (a little less legible, but will perform slightly better)
var nodes:Iterator<Xml> = E4X.x(xml._(a(attName=="id" && attValue=="test")).ances());
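
Since the macro returns a standard iterator, the results can be consumed directly in a Haxe for loop. Here’s a minimal sketch along those lines (the XML literal is purely illustrative, and it assumes the E4X class has been imported):

// Print the node name of every descendant whose "id" attribute equals "test".
var xml:Xml = Xml.parse('<root><a id="test"/><b><c id="test"/></b></root>').firstElement();
for (node in E4X.x(xml._(a("id")=="test"))) {
    trace(node.nodeName);
}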

 

Comparison with AS3 E4X

Getting children with a specific node name (i.e. “node”):
    AS3 E4X:  xmlRoot.node
    Haxe E4X: xmlRoot.node

Getting descendants with a specific node name (i.e. “node”):
    AS3 E4X:  xmlRoot..node
    Haxe E4X: xmlRoot._("node")

Getting an attribute:
    AS3 E4X:  xmlRoot.@id
    Haxe E4X: xmlRoot.a("id")

Getting all descendants with an “id” attribute:
    AS3 E4X:  xmlRoot..(@id.length())
    Haxe E4X: xmlRoot._(a("id"))

Note that all of these examples should be wrapped in the E4X.x() call, as in the code snippets above.

 

Performance

I also ran some performance tests for several targets (and the equivalent tests in AS3 E4X), the results of which are below.
This helped me make some performance improvements to Oleg’s original code, and I managed to squeeze an extra 25-30% increase in performance out of it.

Surprisingly, the JS target seems to perform best overall (although this is probably more a result of Chrome’s JS engine).
Even after my improvements, the AS3 target was woefully slow compared to its native counterpart, although all of the other targets seemed to hold their own, with more complex expressions becoming faster than the AS3 E4X equivalent (if anyone knows why the AS3 target performs so poorly, let me know).

                                      AS3 E4X  Hx > Flash  Hx > JS  Hx > C++  Hx > Neko
Get Children                             0.00        0.21     0.03      0.22       0.03
Get Children With Attrib                 0.08        1.02     0.24      0.10       0.31
Get Descendants                          0.92        7.20     0.52      0.57       0.94
Get Descendant Text                      1.48       19.40     1.97      1.83       3.22
Get Descendants by Name                  2.33       11.05     0.24      0.51       1.42
Get Descendants with matched Attrib.     2.70       30.10     1.20      2.23       7.51

Measurements are in seconds per 1000 calls.
JS tests done in Chrome 24, Win64.

If anyone has any idea how to use the @ symbol in method names in Haxe (without the compiler complaining), let me know and I’ll make attribute accessors match the spec.

I will be releasing this code as part of an upcoming Haxe library called “xml-tools”, but until then, feel free to download the E4X class here.

Shout out if you have any issues.

Fitting Text into a Box

On a recent job I was tasked with creating a visually elegant replacement for an image out of some text (when an image was unavailable).
I decided to adjust the font-size of each line of text in a block to fill a box.

The result is the TextBoxTest class.
To use it, you create a text field, set its properties (text/size/multiline etc.), then pass it to the TextFitter class like this:

var field:TextField = new TextField();
field.text = "THIS IS SOME DEMO TEXT.";
field.width = field.height = 50;
field.multiline = field.wordWrap = true;
TextFitter.fit(field);

The second parameter is ‘defaultSize’; this gives the TextFitter a starting point when resizing the text, and it does affect the end result. If omitted, the font size from ‘defaultTextFormat’ will be used.

If wordWrap is set to false, you’ll have to manually add line-breaks. In this mode lines of text will simply be resized until they’re the same width as the field itself (the height of the field will be ignored).

If wordWrap is set to true, the height of the field will be taken into account, and the text will be reduced in size until it all fits within the field. If ‘defaultSize’ is omitted and ‘wordWrap’ is true, TextFitter will first attempt to maximise defaultSize so that it always fills the field (even if the defaultTextFormat.size property wouldn’t ordinarily fill the field).
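
The manual line-break mode looks much the same; here’s a quick sketch in the same style as the snippet above (it assumes fit() accepts the optional ‘defaultSize’ as its second argument, as described above):

var field:TextField = new TextField();
field.multiline = true;
field.wordWrap = false; // manual line-breaks; the field's height is ignored
field.text = "FIRST LINE\nA SECOND, LONGER LINE";
field.width = field.height = 100;
TextFitter.fit(field, 30); // 30 is the optional defaultSize starting point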

It’s worth noting that the code uses multiple while loops, and while it has internal limits on iterations, I’d recommend running it once and then generating images from the output.

Click flash to focus, Orange circles resize boxes

Download the demo source here

Haxe Lazy Instantiation Macro

After finally settling on msignals as the event system used in my libraries, I turned my attention to cutting down the amount of code required to use it.

To minimise the memory overhead (and dispatching performance) of signals, I prefer to use lazy instantiation. But this can make the implementation a bit verbose.

Hence my lazy instantiation macro: it only cuts out three lines per signal, but it makes the code considerably more legible.
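
For context, this is roughly what lazy instantiation of a signal looks like when written by hand; a sketch only, using msignal’s Signal0 class and Haxe 2 property syntax (the class and member names here are hypothetical):

import msignal.Signal;

class Downloader
{
    private var _completed:Signal0;
    public var completed(getCompleted, null):Signal0;

    // Only create the signal the first time a listener asks for it.
    private function getCompleted():Signal0 {
        if (_completed == null) _completed = new Signal0();
        return _completed;
    }

    // Dispatch only if the signal was ever instantiated (i.e. someone listened).
    private function onComplete():Void {
        if (_completed != null) _completed.dispatch();
    }
}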

Continue reading

Space Junkie update

In my spare time I’ve been working on a game for mobile.
It’s based on Haxe/NME and Nape Physics.

In this demo you can also catch a tiny glimpse of my upcoming GUI library (using the Chutzpah skin by Morgan Allan Knutson).


Click flash to focus, Arrows to move

Generating Docs for Github Wiki from Haxe code

Following some pretty good feedback on Composure, the composition library for Haxe, I decided to get some code documentation published.

The result was a batch file which would generate documentation in Markdown format, which can then be manually committed and pushed to the github wiki. You can check out some examples of the results here, here & here.

The Documentation system

I wanted to have something generated directly from the code, and I had a preference for having it hosted within the Github Wiki system (for the simplicity of having code and docs accessible from the same place).

When a repository is created on Github, the system automatically creates a second repository to store the wiki files. These can be edited via the Github web interface, or by cloning the wiki repository onto your local hard drive and editing the files manually. The files are in Markdown format, a simplified formatting language which gets converted to HTML by the Github back-end.

I’d need to generate Markdown from my code and have it placed into the wiki repository. I only found a few documentation systems which process Haxe code, and only one of them, ChxDoc, allowed for custom templates.

ChxDoc has a few limitations; specifically, you have no control over which files get generated or what file type it spits out, so I’d have to reorganise and rename the output files as part of my batch file. ChxDoc works by processing an XML representation of the code, which is created by running the Haxe compiler with the -xml flag; I’d include this step in my batch file as well.
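
For reference, generating that XML is just a normal compile with the -xml flag added, something along these lines (the class name and paths below are placeholders, not the real project settings):

haxe -cp src -neko bin/docs-dummy.n -xml bin/code.xml com.example.RootClass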

The Templates

I copied the default ChxDoc template, stripped all of the HTML tags out, then added in the Markdown syntax. I didn’t need several of the output HTML files, so some of the template files remained untouched (deleting them caused ChxDoc to fail).

ChxDoc (and my template) supports the following tags:

@author
@deprecated
@param
@private
@requires
@return (or @returns)
@see
@throws
@todo
@type

If you just want the template files, you can grab them here.

Setting up the Repositories

To streamline the documentation process, I added the wiki repository as a submodule of the main repository. This means the wiki files always sit in the same position relative to the main source code (i.e. in a ‘github-wiki’ folder).
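
If you haven’t set up a submodule before, it’s a single command run from the root of the main repository (Github hosts each project’s wiki at the same URL with “.wiki” appended; the user/repo below are placeholders):

git submodule add https://github.com/<user>/<repo>.wiki.git github-wiki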

In the ‘build’ directory, I created a batch file which does the following:

  • Generates the XML representation of the code using the haxe compiler.
  • Deletes the old documentation folder.
  • Regenerates the documentation using the chxdoc program, my templates and the xml code graph.
  • Deletes irrelevant generated files.
  • Renames the ‘All Classes’ file (changing the file type from html to md).
  • Changes the remaining html files to md files.

The batch file, template and ChxDoc are all included in the Composure repository (don’t tell me that binary files don’t belong in a repository you nazis) or you can just check out the batch file here (which will obviously only function on Windows).

Edit: 06/02/2013

I have updated the templates for the latest ChxDoc version (1.2.0).

Introducing Composure for Haxe (with Dependency Injection)

Over the years I have realised that inheritance is massively overused by developers, and that by replacing it with a solid composition design, a lot of code becomes a lot more reusable.

Some languages and platforms have native support for composition (e.g. Unity3D), but for the languages I use there was nothing, so about two years ago I built a lightweight composition framework for AS3 called Composure. I’ve recently rebuilt it completely for Haxe, utilising Haxe’s awesome typing and macro systems to make this small library really powerful.
Continue reading

Internet Archive Android App

I’ve just released a new Android App.
It allows you to watch out-of-copyright videos from the Internet Archive Database on your phone or tablet.
It’s currently early days and it doesn’t really have any browse functionality yet, just search fields.

An iOS version will be coming soon also.

Evolvex Furniture Builder

In this flash app I built for Evolvex, users can assemble furniture from different components in a 3D environment. Once finished, the furniture can be purchased; all of the components, along with a diagram of the furniture, then get sent to the user.
Continue reading

SWC packaging ANT Task for Flash Builder

Download the SWC Packager ANT task here (source included)

Often, my project workflows include checkouts of other remote code repositories. This means I can directly edit the code and have it immediately compiled into my project, without having to compile an SWC and copy it in. This can lead to problems when the project needs to be rolled back to a previous revision (there is no easy way of knowing which revision the remote repositories should be checked out at), and issues with remote repositories that get moved or removed.

In the past I have used the SVN externals to achieve this (including using the ‘-r’ option to pin externals to specific revisions), but this solution still didn’t get around repositories that are moved/removed.

I ended up writing an ANT task which can build an SWC based on a Flash Builder project file. This way I could integrate regular SWC creation into my deployment process, backing up all external code.
The ANT task also has the option to export a manifest XML file detailing all of the classes included in the SWC.

These are the compilation arguments taken from the project file:

  • All classpaths & SWC paths
  • Accessibility setting
  • Target flash player

These are the additional options:

  • sdk – a path to the sdk folder. This is used to find the frameworks directory and the compiler.
  • projectPath – a path to the root folder of the project.
  • sourceExceptions (optional) – a comma separated list of source paths to exclude from the SWC (they will still be compiled against, just not included)
  • linkReport (optional) – a path to a link report XML file (as output by the mxmlc compiler). This can be used to specify a list of classes to include: by adding the link-report argument to your main project (see the example after this list) and then passing this path to the SWC Packager, your SWC will only include classes currently used in your project.
  • includeMainSource (optional) – a boolean specifying whether the main source path should be included in the SWC. Defaults to false.
  • manifestOutput (optional) – a path specifying where the manifest XML file should be saved.
  • computeDigest (optional) – a boolean specifying whether a catalog.xml file should be generated (for use by RSL). Defaults to false.
  • outputAsDirectory (optional) – a boolean specifying whether the compiler should output a folder with AS files instead of an SWC file. Defaults to false.
  • additionalArgs (optional) – additional compiler arguments can be passed through here.
  • configXML (optional) – a string pointing to an additional config XML file.
  • compileDebug (optional) – a boolean specifying whether the debug compiler option should be used. Defaults to false.
  • verbose (optional) – setting this to true will make the SWC Packager print out the full compiler command before executing it. Defaults to false.
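
As mentioned in the linkReport option above, the report itself comes from the Flex compiler; adding something like the following to your main project’s additional compiler arguments will write one out on every build (the path is just an example):

-link-report=build/linkReport.xml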

When includeMainSource is set to true and a link-report is specified, only classes that meet both conditions will be included.

Here is an example of how the task can be used in your ANT script:

<project name="SWC Packager" basedir="../">
    <property name="FLEX_HOME" value="C:\FlexSDKs\4.5.0" />

    <taskdef name="swcPackager" classpath="${basedir}\build\SWCPackager.jar" classname="org.farmcode.antTasks.SWCPackager"/>
   
    <target name="Package All Source" description="Builds all AS code into an SWC.">
        <swcPackager sdk="${FLEX_HOME}" projectPath="${basedir}"
                includeMainSource="true" swcOutput="${basedir}/build/allCode.swc" configXML="config.xml"/>
    </target>
   
    <target name="Package All Third-party Source" description="Builds all AS code except main src dir into an SWC.">
        <swcPackager sdk="${FLEX_HOME}" projectPath="${basedir}" includeMainSource="false"
                swcOutput="${basedir}/build/thirdPartyCode.swc" configXML="config.xml"/>
    </target>
   
    <target name="Package All Referenced Source" description="Builds all referenced AS code except main src dir into an SWC.">
        <swcPackager sdk="${FLEX_HOME}" projectPath="${basedir}" linkReport="${basedir}/build/linkReport.xml"
                includeMainSource="true" swcOutput="${basedir}/build/allReferencedCode.swc" configXML="config.xml"/>
    </target>
</project>

Download the SWC Packager ANT task here (source included)

Nokia Bill Exchange

This is a microsite I built for Nokia with the great group at JWT.

I built the physics portion of the site, all of the items and navigation moving around in the background using Box2D. I also built several of the pages.


Continue reading

The Bad Eggs

I’ve been doing some work down at JWT recently and found myself building this silly physics prototype.
You can add eggs and then smash them against one another or the walls.


A more general CSS

As a language, I like CSS; it has a simple elegance that achieves its humble goals very nicely.
It’s these humble goals that bother me.

HTML is essentially an XML-like language, and fundamentally, all CSS is doing is targeting nodes within this XML and modifying their attributes. Of course, it’s not possible to, for example, modify the href property of an anchor tag, and this shows that the implementation of CSS is stifled by its modest goals.

Also, CSS’s ever-growing list of selectors shows a fundamental inflexibility in its syntax. Selectors basically target specific nodes within your HTML, which sounds to me an awful lot like a job for XPath.

Imagine a CSS where XPath statements replace selectors and a generalised set of property modifiers replaced style declarations.
In this way you could (for example) set all the target attributes within all anchor tags to "_self".

//a {
    target:set("_self");
}

Or you could remove all the width and height attributes from img tags that don’t have a src attribute.

//img[not(@src)] {
    width:remove();
    height:remove();
}

Or you could add some copyright info to the alt attribute of any img tags within any div tags with class="portfolio".

//div[@class="portfolio"]//img {
    alt:append(" © 2010");
}

The biggest issue I can see arising from this is that styles on HTML elements are not individual attributes but a collection of declarations compiled into a single attribute.
To get around this I have a few solutions.
Firstly, we could create a Regular-Expression-based string modifier which gets executed on the specified attribute of the targeted nodes (in this case the ‘style’ attribute), something like this (here I’m setting line-height:20px; on all p tags):

//p{
    style:mod("line-height\:.*;", "line-height:20px;");
}

I think you can probably see that this is not ideal though: it is very verbose, difficult to read, and would make cascading declarations a nightmare for the browsers reading this code.

The ideal system (IMO) would unfortunately require a modification to the HTML spec: styles would be broken out into their own child node, for example:

<p>
    <style display="block" float="left"/>
</p>
<a>
    <style color="#00ff00"  color.hover="#00ffff"/>
</a>

And then setting our style would be a normal attribute-setting modifier:

//p/style {
    line-height:set("20px");
}

Alternatively, we could create a style-specific attribute modifier. I am reluctant to vouch for this because it is a special case, but it could be the more feasible option (as it doesn’t require changes to the HTML spec). It’d look something like this:

//p{
    style:setStyle("line-height", "20px");
}

You can see that this vastly broadens the scope of CSS, to the point that Cascading Style Sheets is no longer an appropriate title (maybe XMod or something would be better).
Here are several situations I can think of where such a system could be useful (outside of styling). Remember, none of these permanently affect the underlying XML; they’re more of a filter through which the XML is viewed.

  • Applying formatting to Word documents using the XDOC file format.
  • Creating a mobile friendly version of flash files using the XFL file format (i.e. removing filter effects, removing embedded fonts, etc).
  • Modifying the Firefox UI using the XUL interface files (as if we need another way to customise Firefox).

Environmental Controls

Grant Skinner, the renowned ActionScript developer, has recently been playing with applications that span both desktop computers and Android-based smartphones. It struck me that this dual platform would be perfect for an environmental control system. Imagine having installed a simple remote control app on your phone: every time you walk into a hotspot, controls could be sent to your handset to give you control over certain systems around you.

Continue reading

The Robot’s Creole

When I first began programming I was appalled at the simplistic nature of the tools being used to create software; and whilst there have been admirable efforts in the past to make programming a more intuitive affair (think node-based programming), very few of these tools have lived on.

In my opinion, this is symptomatic of a bigger problem in the way we build software; namely, the strong tethering between programming language, compiler & delivery platform.

Imagine a system where, as you typed (in your preferred language), your IDE was converting all of your code behind the scenes into an XML-based representation of the language structures you were typing. These XML representations are what get saved in your files, NOT your ‘human-readable’ programming language.

This effectively turns the code you view in your IDE into a ‘rendering’ of the core XML structures in the files.

Consider these benefits:

  • Formatting preferences (like the eternal cuddling brackets debate) would be stored in your IDE, allowing everyone to view the code however they want.
  • Programmers are not restricted to platforms or compilers. For example, anyone could happily write a Flash application in C#, because both languages would be saved into the same core XML structure.
  • Pre-compilers and IDE tools would be completely cross-language. Consider having refactoring tools that work across all languages, or code documentation tools that worked across all languages.
  • New ways of rendering code. Consider being able to have a Colour picker next to colour values, or a popout calendar next to date/time values.
  • Alternate views of code (e.g. Visual layouts for forms, Class diagrams of entire program, UML diagrams) would all become first class citizens and would be language agnostic.
  • Breaks down the relationship between files and classes. You could have multiple packages/classes in one file, or alternatively you could split classes into multiple files (e.g. a function in each file). This would not change the way you edit the code within the IDE. This could make for easier file handling and version control.
  • Compilers and pre-compilers would be faster in two ways: firstly, they’d only have to parse XML instead of processor-intensive human-readable languages; secondly, this parsing need only happen once, after which all compilers/pre-compilers could use the in-memory standard data structures.

Arbitrary Formatting

When discussing this with people, I have found the biggest concern is to do with arbitrary formatting of code, i.e. occasionally it is beneficial to break one’s preferred formatting rules to make code more readable. This could easily be solved by the IDE detecting whenever you’ve broken your own preferred formatting rules (as defined in the IDE preferences) and inserting a ‘formatting’ tag into the XML structure, describing the custom formatting. Of course, this formatting difference will only be applicable within certain languages, and so these ‘formatting’ tags would also be accompanied by information describing which language (or group of languages) the tag applies to. Alternatively, this information could be stored in a separate file, keeping the code free of arbitrary data (and potentially making these formatting choices specific to the programmer who made them).

The Challenges

There are many challenges that such an idea faces; these are the initial ones that spring to mind.

Language Types & Paradigms

Which languages and structures should be able to be represented within this core XML language?

There are vast differences in the structures and paradigms of (for example) functional languages and OOP languages, and much thought must be put into how similar these core items are. For example, the statelessness of functional programming means that it may not contain non-local variables. This doesn’t mean that it is incompatible with this system, only that if you use non-local variables in your program you will not be able to use functional compilers.

Compiler restrictions on language features

How will your IDE know whether the language features used in your program are available to the compilers you wish to target?

Say you are writing your application in C# but intend to compile to JavaScript, and you use getters and setters in your code (which are not a native feature of JavaScript). There needs to be some way for your IDE to detect that you’re using invalid code for the JavaScript compiler.

One solution would be to create a standardised way for compilers to declare which language features they support. The programmer would target the intended compilers in the project settings, then the IDE could report on invalid uses.

Interaction between Strong and Loose typing

Writing code in a completely loosely typed language (like JavaScript) would not provide enough information for some strongly typed compilers; whilst it would be possible to use a solution like the compiler declaration mentioned above, I believe there is a better solution to this.

Currently, in most OOP languages (Java, C#, AS3, etc) coders use specifically formatted comments to represent information about certain classes/members, which is then used to generate documentation. I believe a similar system could be used to add/display type information in loosely typed languages where adding this information outside of comments would break the language spec. It would be the language parser’s job to analyse these comments and put them into the correct XML data structures (and likewise I believe documentation information should be stored in standard XML structures, not in arbitrary comments).

Using native APIs and existing compiled libraries

All programs use native APIs and existing compiled libraries, whether it’s simply for base mathematics or multi-dimensional matrix operations (think MATLAB). These APIs need to be referenced in a manner that doesn’t tie your code to a specific Compiler/Platform.

One solution could be to develop a set of standard API interfaces. Existing APIs would then be packaged up to declare which of the API interfaces they support and how those map to their internal classes and members. This would allow native APIs and external libraries to be interchangeable, and eventually compilers could easily support features that are native to other platforms without modifying the target platform.

Russell Investment Calculator

Here is an application I built for Russell Investment during my time at the Farm.
It allows customers to work out the best way they can distribute their contributions to maximise their returns.
It presented some interesting challenges like building an efficient Data-Grid, and formatting text as it is typed by users (I got to finally use a diff formula).

Check out the app here.

Bonds Hipsters

While at the Farm, I built this site for Bonds, leading one other developer.
It connects with your Facebook account and plays a montage of your profile pics along with the Hipsters TVC.

Warner Videos

Here is a site I built during my time at the Farm, leading one other developer.
Unfortunately, we (the Farm) didn’t get the opportunity to design it as they wanted it to match a design from the States.
That said, we did use it as an opportunity to finish building our visual layout and data-mapping library, which means that, based on the XML data coming from the back-end, the entire site can be re-laid out.

Check it out here

Wiggle Time

I worked for about 9 months solid on this Virtual World for the Wiggles during my time at the Farm.
I was heavily involved right from the conceptual stage, which I think really helped in creating a great-looking and technically accomplished result.
In the end I led a team of about six developers, three concurrently.

Children would move their customised avatar around in the Big Red Car; traversing a richly illustrated, parallax-based world. They could enter the houses of the Wiggles & Co. and had their own house in the world where all of their prizes were stored. To earn these toys, children played mini-games around the world, helping the characters achieve certain goals. It also included a TV to watch Wiggles video content.

We built a lot of interesting tools for this project, including a bezier library, a small AI library and a parallax library. It also integrated with existing open-source libraries like the Box2D physics engine, the goASAP tweening library and the Flint particle engine.

Open up to Mail

This is a microsite I built for MercerBell/Australia Post in my time at the Farm.

It uses Papervision to move around the room. It took us a long time to get the room looking right, but I think it’s still quite a nice little execution.


Continue reading

Sunbeam Coffee School

I led a team of three on this site for Sunbeam during my time at the Farm.
I’m still very happy with the result, especially the navigation, which still looks really great.

Continue reading

© 2017 Thomas Byrne
