
August 07 2018

14:30

Doing It Right examples on autotools, qmake, cmake and meson

About

I finished my earlier work on build environment examples, illustrating how to do versioning of shared object files right with autotools, qmake, cmake and meson. You can find it here.

The DIR examples show, for various build environments, how to create a good project structure that builds libraries which are versioned with libtool, or have versioning equivalent to what libtool would deliver, have a pkg-config file, and have a so-called API version in the library’s name.

What is right?

Information on this can be found in the autotools mythbuster docs, the libtool docs on versioning, and FreeBSD’s chapter on shared libraries. I tried to ensure that what is written here works with all of the build environments in the examples.

libpackage-4.3.so.2.1.0, what is what?

You’ll notice that a library called ‘package’ will, in your LIBDIR, often be called something like libpackage-4.3.so.2.1.0.

We call the 4.3 part the APIVERSION, and the 2.1.0 part the VERSION (the ABI version).

I will explain these examples using semantic versioning for the APIVERSION, and either libtool’s current:revision:age or a semantic-versioning alternative as the VERSION field (as in FreeBSD, and for build environments where compatibility with libtool’s -version-info feature isn’t a requirement).

Note that with libtool’s -version-info feature, the values that you fill in for current, revision and age will not necessarily be identical to what ends up as the suffix of the soname in LIBDIR. For libtool, the formula for the filename’s suffix is “(current - age).age.revision”. This means that for the soname libpackage-APIVERSION.so.2.1.0, you would need current=3, revision=0 and age=1.
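
As a quick sanity check, here is a minimal Makefile.am sketch (the target and source names are hypothetical) that yields exactly that 2.1.0 suffix:

lib_LTLIBRARIES = libpackage-4.3.la
libpackage_4_3_la_SOURCES = package.c
# -version-info takes current:revision:age; the installed suffix becomes
# (current - age).age.revision = (3 - 1).1.0 = 2.1.0
libpackage_4_3_la_LDFLAGS = -version-info 3:0:1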

The VERSION part

In case you want compatibility with or use libtool’s -version-info feature, the document libtool/version.html on autotools.io states:

The rules of thumb, when dealing with these values are:

  • Increase the current value whenever an interface has been added, removed or changed.
  • Always increase the revision value.
  • Increase the age value only if the changes made to the ABI are backward compatible.

The updating-version-info section of libtool’s docs states:

  1. Start with version information of ‘0:0:0’ for each libtool library.
  2. Update the version information only immediately before a public release of your software. More frequent updates are unnecessary, and only guarantee that the current interface number gets larger faster.
  3. If the library source code has changed at all since the last update, then increment revision (‘c:r:a’ becomes ‘c:r+1:a’).
  4. If any interfaces have been added, removed, or changed since the last update, increment current, and set revision to 0.
  5. If any interfaces have been added since the last public release, then increment age.
  6. If any interfaces have been removed or changed since the last public release, then set age to 0.

When you don’t care about compatibility with libtool’s -version-info feature, then you can take the following simplified rules for VERSION:

  • SOVERSION = Major version
  • Major version: increase it if you break ABI compatibility
  • Minor version: increase it if you add ABI compatible features
  • Patch version: increase it for bug fix releases.

These simplified rules are applicable in build environments like cmake, meson and qmake. When you use autotools you will be using libtool, and then they aren’t applicable.

The APIVERSION part

For the API version I will use the rules from semver.org. You can also use the semver rules for your package’s version:

Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards-compatible manner, and
  3. PATCH version when you make backwards-compatible bug fixes.

When you have an API, that API can change over time. You typically want to version those API changes so that users of your library can adapt to newer versions of the API while, at the same time, other users still use older versions of it. For this we can follow section 4.3, “Multiple libraries versions”, of the autotools mythbuster documentation. It states:

In this situation, the best option is to append part of the library’s version information to the library’s name, which is exemplified by GLib’s libglib-2.0.so.0 soname. To do so, the declaration in the Makefile.am has to be like this:

lib_LTLIBRARIES = libtest-1.0.la

libtest_1_0_la_LDFLAGS = -version-info 0:0:0

The pkg-config file

Many people use many build environments (autotools, qmake, cmake, meson, you name it). Nowadays almost all of those build environments support pkg-config out of the box, both for generating the file and for consuming it to get information about dependencies.

I consider it a necessity to ship a useful and correct pkg-config .pc file. The filename should be /usr/lib/pkgconfig/package-APIVERSION.pc for soname libpackage-APIVERSION.so.VERSION. In our example that means /usr/lib/pkgconfig/package-4.3.pc. We’d use the command pkg-config package-4.3 --cflags --libs, for example.
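
To illustrate, a minimal sketch of what such a .pc file could look like (paths and description are placeholders, not taken from a real project):

prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: package-4.3
Description: Example library (placeholder)
Version: 4.3.0
Libs: -L${libdir} -lpackage-4.3
Cflags: -I${includedir}/package-4.3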

An example is GLib’s pkg-config file, located at /usr/lib/pkgconfig/glib-2.0.pc.

The include path

I consider it a necessity to ship API headers in a per-API-version location (like, for example, GLib’s at /usr/include/glib-2.0). This means that your API version number must be part of the include path.

For example, using the earlier mentioned API version 4.3: /usr/include/package-4.3 for /usr/lib/libpackage-4.3.so(.2.1.0), having /usr/lib/pkgconfig/package-4.3.pc.

What will the linker typically link with?

For -lpackage-4.3, the linker will typically link with /usr/lib/libpackage-4.3.so.2, or in other words with libpackage-APIVERSION.so.(current - age). Note that the part that is calculated as (current - age) is, for example in cmake and meson, often referred to as the SOVERSION. With SOVERSION, the soname template in LIBDIR is libpackage-APIVERSION.so.SOVERSION.
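
You can check which soname the linker will record by inspecting the library’s dynamic section; the output below is illustrative:

$ objdump -p /usr/lib/libpackage-4.3.so.2.1.0 | grep SONAME
  SONAME               libpackage-4.3.so.2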

What is wrong?

Not doing any versioning

Without versioning you can’t make any API or ABI changes that won’t break all your users’ code in a way that would be manageable for them. If you do decide not to do any versioning, then at least don’t put anything behind the .so part of your library’s filename. That way, at least you won’t break things in spectacular ways.

Coming up with your own versioning scheme

Knowing it better than the rest of the world will, in spectacular ways, make everything you do break with what the entire rest of the world does. You shouldn’t congratulate yourself on that. The only thing that can be said about it is that it probably makes little sense, and that others will probably start ignoring your work. Your mileage may vary. Keep in mind that without a correct SOVERSION, certain things will simply not work correctly.

In case of libtool: using your package’s (semver) release numbering for current, revision, age

This is similarly wrong to ‘Coming up with your own versioning scheme’.

The Libtool documentation on updating version info is clear about this:

Never try to set the interface numbers so that they correspond to the release number of your package. This is an abuse that only fosters misunderstanding of the purpose of library versions.

This basically means that once you are using libtool, also use libtool’s versioning rules.

Refusing or forgetting to increase the current and/or SOVERSION on breaking ABI changes

The current part of the VERSION (current, revision and age) minus age, in other words the SOVERSION, is the most significant field. The current and age are usually involved in forming this so-called SOVERSION, which in turn is used by the linker to know which ABI version to link with. That makes it … damn important.

Some people think ‘all this is just too complicated for me’, ‘I will just refuse to do anything and always release using the same version numbers’. That goes spectacularly wrong whenever you make ABI-incompatible changes. It’s similarly wrong to ‘Coming up with your own versioning scheme’.

That way, after your shared library gets updated, all programs that link with it can easily crash, can corrupt data, and may or may not work.

By updating the current and age, or the SOVERSION, you basically signal the people who manage packages, and their tooling, to rebuild the programs that link with your shared library. You actually want that the moment you make breaking ABI changes in a newer version of it.

When you don’t want to care about libtool’s -version-info feature, there is also a simpler set of rules to follow. Those rules are, for VERSION:

  • SOVERSION = Major version (with this simplified set of rules, no subtracting of current with age is needed)
  • Major version: increase it if you break ABI compatibility
  • Minor version: increase it if you add ABI compatible features
  • Patch version: increase it for bug fix releases.

What isn’t wrong?

Not using libtool (but nonetheless doing ABI versioning right)

GNU libtool was made to make certain things easier. Nowadays many popular build environments also make those things easier. Meanwhile, GNU libtool has been around for a long time, and its versioning rules, commonly known as the current:revision:age field passed as parameter to -version-info, got widely adopted.

What GNU libtool did was, however, not really a standard. It is one interpretation of how to do it. And a rather complicated one, at that.

Please let it be crystal clear that not using libtool does not mean that you may do ABI versioning wrong. Very often people seem to think that they can, and that they’ll still get out safely while doing ABI versioning completely wrong. This is not the case.

Not having an APIVERSION at all

It isn’t wrong not to have an APIVERSION in the soname. It does, however, mean that you promise never to break the API. Because the moment you break the API, you prevent your users from staying on the old API a little longer. They might have both programs that use the old API and programs that use the new one. Now what?

When you have an APIVERSION then you can allow the introduction of a new version of the API while simultaneously the old API remains available on a user’s system.

Using a different naming-scheme for APIVERSION

I used the MAJOR.MINOR version numbers from semver to form the APIVERSION. I did this because only the MAJOR and the MINOR are technically involved in API changes (unless you are doing semantic versioning wrong – in which case see ‘Coming up with your own versioning scheme’).

Some projects only use MAJOR. An example is Qt, which puts the MAJOR number behind the Qt part: for example libQt5Core.so.VERSION (so that’s “Qt” + MAJOR + Module). The GLib world, however, uses “g” + Module + “-” + MAJOR + “.0”, as they have releases like 2.2, 2.3, 2.4 that are all called libglib-2.0.so.VERSION. I guess they figured that maybe someday in their 2.x series, they could use that MINOR field?

DBus seems to be using a similar thing to GLib, but then without the MINOR suffix: libdbus-1.so.VERSION. For their GLib integration they also use it as libdbus-glib-1.so.VERSION.

Who is right, who is wrong? It doesn’t matter too much for your APIVERSION naming scheme, as long as there is a way to differentiate the API in a) the include path, b) the pkg-config filename and c) the library that will be linked with (the -l parameter during linking/compiling). Maybe someday a standard will be defined? Let’s hope so.

Differences in interpretation per platform

FreeBSD

FreeBSD’s Shared Libraries section of Chapter 5, Source Tree Guidelines and Policies, states:

The three principles of shared library building are:

  1. Start from 1.0
  2. If there is a change that is backwards compatible, bump minor number (note that ELF systems ignore the minor number)
  3. If there is an incompatible change, bump major number

For instance, added functions and bugfixes result in the minor version number being bumped, while deleted functions, changed function call syntax, etc. will force the major version number to change.

I think that when you use libtool on FreeBSD (i.e. when you use autotools), the platform will provide a variant of libtool’s scripts that converts the earlier mentioned current, revision and age rules to FreeBSD’s. The same goes for the VERSION variable in cmake and qmake. Meaning that with those three build environments, you can just use the rules for GNU libtool’s -version-info.

I could be wrong on this, but I did find mailing list e-mails from around 2011 stating that this SNAFU is dealt with. Besides, the *BSD porters otherwise know what to do, and you could of course always ask them about it.

Note that FreeBSD’s rules are or seem to be compatible with the rules for VERSION when you don’t want to care about libtool’s -version-info compatibility. However, when you are porting from a libtoolized project, then of course you don’t want to let newer releases break against releases that have already happened.

Modern Linux distributions

Nowadays you sometimes see things like /usr/lib/$ARCH/libpackage-APIVERSION.so linking to /lib/$ARCH/libpackage-APIVERSION.so.VERSION. I have no idea how this mechanism works. I suppose this is being done by packagers of various Linux distributions? I also don’t know if there is a standard for this.

I will update the examples and this document the moment I know more and/or if upstream developers need to worry about it. I think that using GNUInstallDirs in cmake, for example, makes everything go right. I have not found much for this in qmake; meson seems to do this by default, and in autotools you always use platform variables for such paths.

As usual, I hope standards will be made, and that the build environment and packaging communities come to their senses and stop leaving this in the hands of developers. I especially think of qmake, which seems to have little to nothing stating that standardized installation paths must be used (not even a proper way to define a prefix).

Questions that I can imagine already exist

Why is there a difference between APIVERSION and VERSION?

The API version is the version of your programmable interfaces: the version of your header files (if your programming language has such header files), the version of your pkg-config file, the version of your documentation. The API is what software developers need to utilize your library.

The ABI version can definitely be different, and it is what compiled and installed programs need to utilize your library.

An API breaks when recompiling a program that consumes libpackage-4.3.so.2, without any changes to that program, is not going to succeed at compile time. The API got broken the moment any possible way the package’s API was used won’t compile. Yes, any way. It means that a libpackage-5.0.so.0 should be started.

An ABI breaks when, without recompiling the program, replacing a libpackage-4.3.so.2.1.0 with a libpackage-4.3.so.2.2.0 or a libpackage-4.3.so.2.1.1 (or later) as libpackage-4.3.so.2 is not going to succeed at runtime. For example because it would crash, or because the results would be wrong (in any way). It implies that libpackage-4.3.so.2 shouldn’t be overwritten; instead a libpackage-4.3.so.3 should be started.

For example, when you change a function parameter in C from an integer to a floating point (or the other way around), that’s an ABI change but not necessarily an API change.
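
To make that concrete, here is a hypothetical C illustration of two successive releases (package_compute is a made-up symbol):

/* libpackage-4.3.so.2.x shipped this declaration: */
int package_compute(int value);

/* the next release changes the parameter type: */
int package_compute(double value);

/* A caller written as package_compute(1) still compiles unchanged,
 * because the literal is implicitly converted, so the API can survive.
 * The argument is passed differently at the machine level though
 * (integer vs. floating-point register), so the ABI broke and the
 * SOVERSION must be bumped. */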

What is this SOVERSION about?

In most projects that got ported from an environment that uses GNU libtool (for example autotools) to, for example, cmake or meson (and in the rare cases that they did anything at all in a qmake based project), I saw people convert the current, revision and age parameters that they passed to libtool’s -version-info option into a string concatenated as (current - age).age.revision for VERSION, and (current - age) for SOVERSION.

I wanted to use the exact same versioning rules for all these examples, including autotools and GNU libtool. When you don’t have to (or don’t want to) care about libtool’s set of (for some people, needlessly complicated) -version-info rules, then it is fine to use just SOVERSION and VERSION with these rules:

  • SOVERSION = Major version
  • Major version: increase it if you break ABI compatibility
  • Minor version: increase it if you add ABI compatible features
  • Patch version: increase it for bug fix releases.

I did, however, also sometimes see variations that are incomprehensible, with little explanation and magic foo invented on the spot. Those variations are probably wrong.

In the examples I made it so that you can change the numbers, and the calculation for the numbers, in the root build file of the project. However, do follow the rules for those correctly, as this versioning is about ABI compatibility. Doing this wrong can make things blow up in spectacular ways.

The examples

qmake in the qmake-example

Note that the VERSION variable must be filled in as “(current - age).age.revision” for qmake (to get 2.1.0 at the end, you need VERSION=2.1.0 when current=3, revision=0 and age=1).

To try this example out, go to the qmake-example directory and type

$ cd qmake-example
$ mkdir _test
$ qmake PREFIX=$PWD/_test
$ make
$ make install

This should give you this:

$ find _test/
_test/
├── include
│   └── qmake-example-4.3
│       └── qmake-example.h
└── lib
    ├── libqmake-example-4.3.so -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2 -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2.1 -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.la
    └── pkgconfig
        └── qmake-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/pkgconfig
$ pkg-config qmake-example-4.3 --cflags
-I$PWD/_test/include/qmake-example-4.3
$ pkg-config qmake-example-4.3 --libs
-L$PWD/_test/lib -lqmake-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment).

$ export LD_LIBRARY_PATH=$PWD/_test/lib
$ echo -en "#include <qmake-example.h>\nint main() {} " > test.cpp
$ g++ -fPIC test.cpp -o test.o `pkg-config qmake-example-4.3 --libs --cflags`

You can see that it got linked to libqmake-example-4.3.so.2, where that 2 at the end is (current - age).

$ ldd test.o 
    linux-gate.so.1 (0xb77b0000)
    libqmake-example-4.3.so.2 => $PWD/_test/lib/libqmake-example-4.3.so.2 (0xb77a6000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb75f5000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb759e000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb7580000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb73c9000)
    /lib/ld-linux.so.2 (0xb77b2000)

cmake in the cmake-example

Note that the VERSION property on your library target must be filled in with “(current - age).age.revision” for cmake (to get 2.1.0 at the end, you need VERSION=2.1.0 when current=3, revision=0 and age=1). Note that in cmake you must also fill in the SOVERSION property as (current - age), so SOVERSION=2 when current=3 and age=1.
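
In CMakeLists.txt terms, that could look like this sketch (target name taken from this example):

set_target_properties(cmake-example PROPERTIES
    VERSION 2.1.0   # (current - age).age.revision
    SOVERSION 2)    # current - age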

To try this example out, go to the cmake-example directory and do

$ cd cmake-example
$ mkdir _test
$ cmake -DCMAKE_INSTALL_PREFIX:PATH=$PWD/_test .
-- Configuring done
-- Generating done
-- Build files have been written to: .
$ make
[ 50%] Building CXX object src/libs/cmake-example/CMakeFiles/cmake-example.dir/cmake-example.cpp.o
[100%] Linking CXX shared library libcmake-example-4.3.so
[100%] Built target cmake-example
$ make install
[100%] Built target cmake-example
Install the project...
-- Install configuration: ""
-- Installing: $PWD/_test/lib/libcmake-example-4.3.so.2.1.0
-- Up-to-date: $PWD/_test/lib/libcmake-example-4.3.so.2
-- Up-to-date: $PWD/_test/lib/libcmake-example-4.3.so
-- Up-to-date: $PWD/_test/include/cmake-example-4.3/cmake-example.h
-- Up-to-date: $PWD/_test/lib/pkgconfig/cmake-example-4.3.pc

This should give you this:

$ tree _test/
_test/
├── include
│   └── cmake-example-4.3
│       └── cmake-example.h
└── lib
    ├── libcmake-example-4.3.so -> libcmake-example-4.3.so.2
    ├── libcmake-example-4.3.so.2 -> libcmake-example-4.3.so.2.1.0
    ├── libcmake-example-4.3.so.2.1.0
    └── pkgconfig
        └── cmake-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):

$ pkg-config cmake-example-4.3 --cflags
-I$PWD/_test/include/cmake-example-4.3
$ pkg-config cmake-example-4.3 --libs
-L$PWD/_test/lib -lcmake-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):

$ echo -en "#include <cmake-example.h>\nint main() {} " > test.cpp
$ g++ -fPIC test.cpp -o test.o `pkg-config cmake-example-4.3 --libs --cflags`

You can see that it got linked to libcmake-example-4.3.so.2, where that 2 at the end is the SOVERSION. This is (current - age).

$ ldd test.o
    linux-gate.so.1 (0xb7729000)
    libcmake-example-4.3.so.2 => $PWD/_test/lib/libcmake-example-4.3.so.2 (0xb771f000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb756e000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb7517000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb74f9000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb7342000)
    /lib/ld-linux.so.2 (0xb772b000)

autotools in the autotools-example

Note that with autotools you pass -version-info current:revision:age directly. Libtool will translate that to (current - age).age.revision to form the so’s filename suffix (to get 2.1.0 at the end, you need current=3, revision=0 and age=1).

To try this example out, go to the autotools-example directory and do

$ cd autotools-example
$ mkdir _test
$ libtoolize
$ aclocal
$ autoheader
$ autoconf
$ automake --add-missing
$ ./configure --prefix=$PWD/_test
$ make
$ make install

This should give you this:

$ tree _test/
_test/
├── include
│   └── autotools-example-4.3
│       └── autotools-example.h
└── lib
    ├── libautotools-example-4.3.a
    ├── libautotools-example-4.3.la
    ├── libautotools-example-4.3.so -> libautotools-example-4.3.so.2.1.0
    ├── libautotools-example-4.3.so.2 -> libautotools-example-4.3.so.2.1.0
    ├── libautotools-example-4.3.so.2.1.0
    └── pkgconfig
        └── autotools-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/pkgconfig
$ pkg-config autotools-example-4.3 --cflags
-I$PWD/_test/include/autotools-example-4.3
$ pkg-config autotools-example-4.3 --libs
-L$PWD/_test/lib -lautotools-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):

$ echo -en "#include <autotools-example.h>\nint main() {} " > test.cpp
$ export LD_LIBRARY_PATH=$PWD/_test/lib
$ g++ -fPIC test.cpp -o test.o `pkg-config autotools-example-4.3 --libs --cflags`

You can see that it got linked to libautotools-example-4.3.so.2, where that 2 at the end is (current - age).

$ ldd test.o 
    linux-gate.so.1 (0xb778d000)
    libautotools-example-4.3.so.2 => $PWD/_test/lib/libautotools-example-4.3.so.2 (0xb7783000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb75d2000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb757b000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb755d000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb73a6000)
    /lib/ld-linux.so.2 (0xb778f000)

meson in the meson-example

Note that the version property on your library target must be filled in with “(current - age).age.revision” for meson (to get 2.1.0 at the end, you need version=2.1.0 when current=3, revision=0 and age=1). Note that in meson you must also fill in the soversion property as (current - age), so soversion=2 when current=3 and age=1.
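
In meson.build terms, that could look like this sketch (the source file name is an assumption):

shared_library('meson-example-4.3', 'meson-example.cpp',
  version : '2.1.0',   # (current - age).age.revision
  soversion : '2',     # current - age
  install : true)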

To try this example out, go to the meson-example directory and do

$ cd meson-example
$ mkdir -p _build/_test
$ cd _build
$ meson .. --prefix=$PWD/_test
$ ninja
$ ninja install

This should give you this:

$ tree _test/
_test/
├── include
│   └── meson-example-4.3
│       └── meson-example.h
└── lib
    └── i386-linux-gnu
        ├── libmeson-example-4.3.so -> libmeson-example-4.3.so.2.1.0
        ├── libmeson-example-4.3.so.2 -> libmeson-example-4.3.so.2.1.0
        ├── libmeson-example-4.3.so.2.1.0
        └── pkgconfig
            └── meson-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I’m replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/i386-linux-gnu/pkgconfig
$ pkg-config meson-example-4.3 --cflags
-I$PWD/_test/include/meson-example-4.3
$ pkg-config meson-example-4.3 --libs
-L$PWD/_test/lib -lmeson-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):

$ echo -en "#include <meson-example.h>\nint main() {} " > test.cpp
$ export LD_LIBRARY_PATH=$PWD/_test/lib/i386-linux-gnu
$ g++ -fPIC test.cpp -o test.o `pkg-config meson-example-4.3 --libs --cflags`

You can see that it got linked to libmeson-example-4.3.so.2, where that 2 at the end is the soversion. This is (current - age).

$ ldd test.o 
    linux-gate.so.1 (0xb772e000)
    libmeson-example-4.3.so.2 => $PWD/_test/lib/i386-linux-gnu/libmeson-example-4.3.so.2 (0xb7724000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb7573000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb751c000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb74fe000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb7347000)
    /lib/ld-linux.so.2 (0xb7730000)

July 11 2018

22:25

Doing it right, making libraries using popular build environments

Enough with the political posts!

Making libraries that are both API and libtool versioned with qmake, how do they do it?

I started a project on github that will collect what I will call “doing it right” project structures for various build environments.

By right I mean that the library will have an API version in its library name, that the library will be libtoolized, and that a pkg-config .pc file gets installed for it.

I have in mind, for example, autotools, cmake, meson, qmake and plain make. The first example that I have finished is one for qmake.

Let’s get started working on a libqmake-example-3.2.so.3.2.1

We get the PREFIX, MAJOR_VERSION, MINOR_VERSION and PATCH_VERSION from a project-wide include

include(../../../qmake-example.pri)

We will use the standard lib template of qmake

TEMPLATE = lib

We need to set VERSION to a semver.org version for compile_libtool (and no, I don’t want to know about your self-invented versioning scheme)

VERSION = $${MAJOR_VERSION}"."$${MINOR_VERSION}"."$${PATCH_VERSION}

According to section 4.3 of the autotools mythbuster docs, we should have the API version in the library’s target name

TARGET = qmake-example-$${MAJOR_VERSION}"."$${MINOR_VERSION}

We will write a define in config.h for access to the semver.org version as a double quoted string

QMAKE_SUBSTITUTES += config.h.in

Our example happens to use QDebug, so we need QtCore here

QT = core

This is of course optional

CONFIG += c++14

We will be using libtool style libraries

CONFIG += compile_libtool
CONFIG += create_libtool

These will create a pkg-config .pc file for us

CONFIG += create_pc create_prl no_install_prl

Project sources

SOURCES = qmake-example.cpp

Project’s public and private headers

HEADERS = qmake-example.h

We will install the headers in an API-specific include path

headers.path = $${PREFIX}/include/qmake-example-$${MAJOR_VERSION}"."$${MINOR_VERSION}

Here put only the publicly installed headers

headers.files = $${HEADERS}

This is where we will install the library to

target.path = $${PREFIX}/lib

This is the configuration for generating the pkg-config file

QMAKE_PKGCONFIG_NAME = $${TARGET}
QMAKE_PKGCONFIG_DESCRIPTION = An example that illustrates how to do it right with qmake
# This is our libdir
QMAKE_PKGCONFIG_LIBDIR = $$target.path
# This is where our API specific headers are
QMAKE_PKGCONFIG_INCDIR = $$headers.path
QMAKE_PKGCONFIG_DESTDIR = pkgconfig
QMAKE_PKGCONFIG_PREFIX = $${PREFIX}
QMAKE_PKGCONFIG_VERSION = $$VERSION

Installation targets (the pkg-config file seems to install automatically)

INSTALLS += headers target

This will be the result after make install

├── include
│   └── qmake-example-3.2
│       └── qmake-example.h
└── lib
    ├── libqmake-example-3.2.so -> libqmake-example-3.2.so.3.2.1
    ├── libqmake-example-3.2.so.3 -> libqmake-example-3.2.so.3.2.1
    ├── libqmake-example-3.2.so.3.2 -> libqmake-example-3.2.so.3.2.1
    ├── libqmake-example-3.2.so.3.2.1
    ├── libqmake-example-3.la
    └── pkgconfig
        └── qmake-example-3.pc

ps. Dear friends working at their own customers: when I visit your customer, I no longer want to see that you produced completely stupid, wrong qmake based projects for them. Libtoolize it all, get an API version in your library’s so-name, and do distribute a pkg-config .pc file. That’s the very least to pass your exam. Also read this document (and stop pretending that you don’t need to know this while you charge them real money, pretending that you know something about modern UNIX software development).


June 26 2018

12:32

Switching back to Firefox

One major grief for me when surfing on Android is ads. They not only increase page size and loading time, but also take away precious screen real estate.

Unfortunately the native Android browser, which nowadays is Chrome, does not support extensions and hence there is no ad-blocker.

Therefore I was quite optimistic when Google announced they would be enforcing the betterads standards with Chrome, aka ad-blocking light.

However, after having used Chrome showing only “betterads”, I must say that they are far away from what is tolerable to me.
I am more in line with the Acceptable Ads criteria. (My site also keeps to them, if you choose to disable ad-blocking here.)

As someone who has to pay for hosting, I fully understand that ads are part of the game. But let’s face it: as long as annoying ads get you more money, there will be annoying ads. Ad-blockers are a very effective way to let money speak here.

So I needed an adblock-capable browser on Android. Fortunately Mozilla greatly improved Firefox performance with their Quantum initiative. Or maybe modern smartphones just got a lot faster. Anyway, a recent Firefox performs virtually the same as Chrome on Android and thus is a viable alternative.

As of recently there is also Microsoft Edge for Android, but it does not actually gain an edge over anything. So let’s stick with open source software.

When switching to Firefox on Android, one should switch to Firefox on the desktop as well, so you get sync across devices.

On Linux

Unfortunately Firefox has bad default settings on Linux.
For one, unlike Chrome, it does not use client-side decorations by default, and thus wastes space in the title bar. But this is easy to fix.

Then it still uses the slow software rendering path. To make it use the GPU, visit about:config and set the following properties to true:

  • layers.acceleration.force-enabled enables OpenGL-based compositing, which makes for smooth scrolling (enabled by default on OSX and Windows)
  • layers.omtp.enabled (OMTP) further improves performance when scrolling (enabled by default on OSX and Windows)

They default to false due to issues on some obscure Mesa/Xorg combinations, but generally work well for me with Nvidia drivers.

On Android

On Android, Firefox generally has sane defaults. The only setting missing to bring it on par with Chrome is Encrypted Media Extensions. For this, again visit about:config, create the following property and set it to true:

  • media.eme.enabled

Still, you will need some time to adapt to Firefox; e.g. there is no pull-to-refresh. However, there are other bonus points besides adblocking; for me, the synchronized tabs sidebar (on desktop) has proven to be an invaluable usability improvement.


April 29 2018

14:00

Teatime & Sensors Unity updated for Ubuntu 18.04

I updated my two little apps, Teatime and Sensors Unity, to integrate with Ubuntu 18.04 and consequently with GNOME 3.

For this I ported them to the GtkApplication API, which makes sure they integrate into Unity7 as well as GNOME Shell. Additionally it ensures that only one instance of the app is active at a time.

As Dash-to-Dock implements the Unity7 D-Bus API and snaps are available everywhere this drastically widens the target audience.

To make the projects themselves more accessible, I also moved development from Launchpad to GitHub, where you can now easily create pull requests and report issues.

Furthermore, the translations are managed at POEditor, where you can help translate the apps to your language. Especially Sensors Unity could use some help.

April 19 2018

00:00

Managing a developer shell with Docker

When I’m not in Flowhub-land, I’m used to developing software in a quite customized command-line development environment. Like for many, the cornerstones of this for me are vim and tmux.

As customization increases, it becomes important to have a way to manage that and distribute it across the different computers. For years, I’ve used a dotfiles repository on GitHub together with GNU Stow for this.

However, this still means I have to install all the software and tools before I can have my environment up and running.

Using Docker

Docker is a tool for building and running software in a containerized fashion. Recently Tiago gave me the inspiration to use Docker not only for distributing production software, but also for actually running my development environment.

Taking ideas from his setup, I built upon my existing dotfiles and built a reusable developer shell container.

With this, I only need Docker installed on a machine, and then I’m two commands away from having my normal development environment:

$ docker volume create workstation
$ docker run -v ~/Projects:/projects -v workstation:/root -v ~/.ssh:/keys --name workstation --rm -it bergie/shell

Here’s how it looks in action:

Working on NoFlo inside Docker shell

Once I update my Docker setup (for example to install or upgrade some tool), I can get the latest version on a machine with:

$ docker pull bergie/shell

At least in theory, this should give me a fully identical working environment regardless of the host machine. A Linux VPS, a MacBook, or a Windows machine should all be able to run this. And soon this should also work out of the box on Chromebooks.

Setting this up

The basics are pretty simple. I already had a repository for my dotfiles, so I only needed to write a Dockerfile to install and set up all my software.
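
Such a Dockerfile could look roughly like this minimal sketch (base image, package list and stow targets are placeholders, not my actual setup):

FROM debian:stretch
RUN apt-get update \
 && apt-get install -y git stow tmux vim \
 && rm -rf /var/lib/apt/lists/*
# bring in the dotfiles and let GNU Stow symlink them into place
COPY . /root/dotfiles
RUN cd /root/dotfiles && stow vim tmux
WORKDIR /projects
CMD ["tmux"]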

To make things even easier, I configured Travis so that every time I push some change to the dotfiles repository, it will create and publish a new container image.

Further development ideas

So far this setup seems to work pretty well. However, here are some ideas for further improvements:

  • ARM build: Sometimes I need to work on Raspberry Pis. It might be nice to cross-compile an ARM version of the same setup
  • Key management: Currently I create new SSH keys for each host machine, and then upload them to the relevant places. With this setup I could use a USB stick, or maybe even a Yubikey to manage them
  • Application authentication: Since the Docker image is public, it doesn’t come with any secrets built in. This means I still need to authenticate with tools like NPM and Travis. It might be interesting to manage these together with my SSH keys
  • SSH host: with some tweaking it might be possible to run the same container on cloud services. Then I’d need a way to get my SSH public keys there and start an SSH server

If you have ideas on how to best implement the above, please get in touch.


November 17 2017

20:02

Semrush, MJ12 and DotBot just slow down your server

I recently migrated a server to a new VHost that was supposed to improve the performance. However, after the upgrade the performance actually got worse.

Looking at the system load, I discovered that the load average was at about 3.5; with only 2 cores available, this corresponds to overloading the server by almost 2x.

Looking further at the logs revealed that this unfortunately was not due to users taking an interest in the site, but due to various bots hammering on the server. Actual users would probably be driven away by the awful page load times at this point.

Asking the bots to leave

To improve page loading times, I configured my robots.txt as follows:

User-agent: *
Disallow: /

This effectively tells all bots to skip my site. In general you should not do this, as you will not be discoverable at e.g. Google.

But here I just wanted to allow my existing users to use the site. Unfortunately the situation only slightly improved; the system load was still over 2.

From the logs I could tell that all bots were actually gone, except for:

  • SemrushBot by semrush.com
  • MJ12Bot by majestic.com
  • DotBot by Moz.com

But those were enough to keep the site (PHP+MySQL) overloaded.

The above bots crawl the web for their respective SEO analytics companies, which sell this information to webmasters. This means that unless you are already a customer of these companies, you do not benefit from having your site crawled.

In fact, if you are interested in SEO analytics for your website, you should probably look elsewhere. In the next paragraph we will block these bots and I am by far not the first one recommending this.

Making the bots leave

As the bots do not respect the robots.txt, you will have to forcefully block them. Instead of the actual webpages, we will give them a 410/403, which prevents them from touching any PHP/MySQL resources.

On nginx, add this to your server section:

if ($http_user_agent ~* (SemrushBot|MJ12Bot|DotBot)) {
     return 410;
}

For Apache 2.4+ do:

BrowserMatchNoCase SemrushBot bad_bot
BrowserMatchNoCase MJ12Bot bad_bot
BrowserMatchNoCase DotBot bad_bot
Order Deny,Allow
Deny from env=bad_bot

For additional fun you could also give them a 307 (redirect) to their own websites here.
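
On nginx, that variant could look like this (illustrative):

if ($http_user_agent ~* SemrushBot) {
    return 307 https://www.semrush.com/;
}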


November 07 2017

21:04

Q1 2018 Community Council Election Announcement

Dear Maemoans and fellow humans. The time has come again to elect a new Community Council for Q1/2018. The schedule for the voting process is as follows, according to the election rules:

The nomination period of at least 2 weeks starts tomorrow, on the 8th of November 2017, and will continue until the 3rd of December 2017. The one-week election starts on Friday, the 8th of December 2017, and will continue until the 14th of December 2017.

In order for us to keep the community strong, it would be great to have new people with fresh ideas to carry on the torch. Please consider volunteering for the position of Maemo Council.

On behalf of the outgoing Community Council, mosen

November 05 2017

01:47

C++ matrix maths – library performance

Recently I have been looking at the Ogre Matrix class, which has a fairly un-optimized but straightforward implementation that you can see here.
I was wondering how it compares.

Of course somebody had a similar question in mind before: Martin Foot, that is. While the discussion still applies today, I felt like the results could have changed since 2012, as libraries and compilers have moved on.

So I forked his code to update the libs to the latest versions and came up with the following results:

Library   add (x86_64, SSSE3)   mult (x86_64, SSSE3)   add (armeabi-v7a, NEON)   mult (armeabi-v7a, NEON)
Eigen3    17 ms                 53 ms                  173 ms                    399 ms
GLM       50 ms                 186 ms                 232 ms                    399 ms
Ogre      50 ms                 184 ms                 232 ms                    399 ms
CML1      116 ms                348 ms                 178 ms                    489 ms

The used compiler was gcc with optimization level -O2.
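
For reference, the timed operations are of this kind; a minimal sketch assuming 4x4 float matrices:

#include <Eigen/Dense>

int main() {
    Eigen::Matrix4f a = Eigen::Matrix4f::Random();
    Eigen::Matrix4f b = Eigen::Matrix4f::Random();
    Eigen::Matrix4f sum  = a + b;  // the "add" test
    Eigen::Matrix4f prod = a * b;  // the "mult" test
    return (sum(0, 0) + prod(0, 0)) > 0.0f ? 0 : 1;  // consume the results
}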

As we can see, Eigen3 just outclasses the rest on x86_64, probably due to its explicit vectorization. Notably, CML1 has some issues and even falls behind the naive implementations.
On ARM the results are tighter, with Eigen3 and CML1 being about 25% faster at addition. However, CML1 again has some issues with the mult test.

We end up with Eigen3 being the overall winner and GLM being second (Ogre does not count, as it is not a math library).

Also, you should migrate away from CML1, as the development focus has shifted to CML2 and the issues found above are probably not going to be resolved.


October 27 2017

14:43

Switching Apache2 to php-fpm for performance

There are many articles on the internet telling you to switch from Apache & mod_php to nginx to get better performance.

However, the main reason for the performance improvement is not nginx itself, but rather the way it integrates PHP.

Different ways to integrate PHP

Apache traditionally used mod_php to embed the PHP interpreter inside the Apache HTTP request handler. This way it can directly interpret PHP scripts, whereas with CGI it would have to start a new PHP interpreter process first, per request.

The drawback, however, is that the PHP interpreter is embedded in all request handlers, even those that just serve static files. This obviously blows up memory consumption, which in turn can lower performance.

Nginx, on the other hand, uses the FCGI approach, where a pool of PHP processes is started alongside the webserver using the FCGI process manager, FPM. The webserver then delegates individual requests using the FCGI protocol as needed.
This avoids the PHP interpreter startup costs, as well as starting it when there is no need to, and is the reason nginx is faster than mod_php.

However, since Apache 2.4 one can also use FCGI to integrate PHP and get virtually the same characteristics as nginx. Sticking with Apache saves you migrating all the .htaccess rules and means an easier setup for many webapps.

Furthermore, since Apache 2.4.10 one can use mod_proxy_fcgi in a reverse-proxy configuration, which further reduces the number of occupied PHP workers in the FPM pool for better performance.

Configuration on Ubuntu 16.04

Switching to FCGI on Ubuntu 16.04 is quite easy. The needed modules are installed by default and just need to be enabled:

a2enmod proxy_fcgi && a2dismod php7.0

Then inside your-site.conf add

 # PHP-FPM
 <FilesMatch "\.php$">
     SetHandler "proxy:unix:/var/run/php/php7.0-fpm.sock|fcgi://localhost/"
 </FilesMatch>
 <Proxy "fcgi://localhost/">
 </Proxy>

This connects Apache in reverse proxy mode to the PHP-FPM pool, using unix domain sockets for optimal performance. See the Apache Wiki for details.

Note that php-fpm by default only creates 5 PHP worker processes, which in turn limits the maximum number of simultaneous connections. You might want to raise this by adapting pm.max_children in /etc/php/7.0/fpm/pool.d/www.conf.

Typically you set this to RAM size / average process size. You can find out the latter via:

ps -ylC php-fpm7.0 --sort:rss
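
For example, assuming roughly 2 GB of RAM reserved for PHP and about 60 MB per worker, the pool configuration sketch could be:

; /etc/php/7.0/fpm/pool.d/www.conf (values are an assumption for this example)
pm = dynamic
pm.max_children = 32    ; ~2048 MB / ~60 MB per process
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 12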

Performance Measurements

To measure the results, I did a force reload of my single-user Nextcloud instance and measured the load time via the Chrome developer tools:

Page    mod_php   mod_proxy_fcgi   Ratio
Files   701 ms    605 ms           0.86
News    1.77 s    1.67 s           0.94

As one can see, depending on the amount of static/dynamic files and internal/external requests, we can bring down the page load time by up to 15%.


October 23 2017

19:31

Asynchronous commands

With asynchronous commands we have typical commands from the Model View ViewModel world that return asynchronously.

Whenever that happens we want result reporting and progress reporting. We basically want something like this in QML:

Item {
  id: container
  property ViewModel viewModel: ViewModel {}

  Connections {
    target: viewModel.asyncHelloCommand
    onExecuteProgressed: {
        progressBar.value = value
        progressBar.maximumValue = maximum
    }
  }
  ProgressBar {
     id: progressBar
  }
  Button {
    enabled: viewModel.asyncHelloCommand.canExecute
    onClicked: viewModel.asyncHelloCommand.execute()
  }
}

How do we do this? First we start with defining an AbstractAsyncCommand (implementation of the protected APIs here):

class AbstractAsyncCommand : public AbstractCommand {
    Q_OBJECT
public:
    AbstractAsyncCommand(QObject *parent=0);

    Q_INVOKABLE virtual QFuture<void*> executeAsync() = 0;
    virtual void execute() Q_DECL_OVERRIDE;
signals:
    void executeFinished(void* result);
    void executeProgressed(int value, int maximum);
protected:
    QSharedPointer<QFutureInterface<void*>> start();
    void progress(QSharedPointer<QFutureInterface<void*>> fut, int value, int total);
    void finish(QSharedPointer<QFutureInterface<void*>> fut, void* result);
private:
    QVector<QSharedPointer<QFutureInterface<void*>>> m_futures;
};

After that we provide an implementation:

#include <QThreadPool>
#include <QRunnable>

#include <MVVM/Commands/AbstractAsyncCommand.h>

class AsyncHelloCommand: public AbstractAsyncCommand
{
    Q_OBJECT
public:
    AsyncHelloCommand(QObject *parent=0);
    bool canExecute() const Q_DECL_OVERRIDE { return true; }
    QFuture<void*> executeAsync() Q_DECL_OVERRIDE;
private:
    void* executeAsyncTaskFunc();
    QSharedPointer<QFutureInterface<void*>> current;
    QMutex mutex;
};

#include "asynchellocommand.h"

#include <QtConcurrent/QtConcurrent>

AsyncHelloCommand::AsyncHelloCommand(QObject* parent)
    : AbstractAsyncCommand(parent) { }

void* AsyncHelloCommand::executeAsyncTaskFunc()
{
    for (int i=0; i<10; i++) {
        QThread::sleep(1);
        qDebug() << "Hello Async!";
        mutex.lock();
        progress(current, i, 10);
        mutex.unlock();
    }
    return nullptr;
}

QFuture<void*> AsyncHelloCommand::executeAsync()
{
    mutex.lock();
    current = start();
    QFutureWatcher<void*>* watcher = new QFutureWatcher<void*>(this);
    connect(watcher, &QFutureWatcher<void*>::progressValueChanged, this, [=]{
        mutex.lock();
        progress(current, watcher->progressValue(), watcher->progressMaximum());
        mutex.unlock();
    });
    connect(watcher, &QFutureWatcher<void*>::finished, this, [=]{
        void* result=watcher->result();
        mutex.lock();
        finish(current, result);
        mutex.unlock();
        watcher->deleteLater();
    });
    watcher->setFuture(QtConcurrent::run(this, &AsyncHelloCommand::executeAsyncTaskFunc));
    QFuture<void*> future = current->future();
    mutex.unlock();

    return future;
}

You can find the complete working example here.


October 17 2017

10:48

Attending the GStreamer Conference 2017

This weekend I’ll be in Node5 (Prague) presenting our Media Source Extensions platform implementation work in WebKit using GStreamer.

The Media Source Extensions HTML5 specification allows JavaScript to generate media streams for playback and lets the web page have more control on complex use cases such as adaptive streaming.

My plan for the talk is to start with a brief introduction about the motivation and basic usage of MSE. Next I’ll show a design overview of the WebKit implementation of the spec. Then we’ll go through the iterative evolution of the GStreamer platform-specific parts, as well as its implementation quirks and challenges faced during the development. The talk continues with a demo, some clues about the future work and a final round of questions.

Our recent MSE work has been on desktop WebKitGTK+ (the WebKit version powering Epiphany, aka GNOME Web), but we also have MSE working on WPE and optimized for a Raspberry Pi 2. We will be showing it at the Igalia booth, in case you want to see it working live.

I’ll also be attending the GStreamer Hackfest in the days before. There I plan to work on WebM support in MSE, focusing on any issues in the Matroska demuxer or the VP9/Opus/Vorbis decoders breaking our use cases.

See you there!

UPDATE 2017-10-22:

The talk slides are available at https://eocanha.org/talks/gstconf2017/gstconf-2017-mse.pdf and the video is available at https://gstconf.ubicast.tv/videos/media-source-extension-on-webkit (the rest of the talks here).


August 24 2017

18:57

The RelayCommand in Qt

A few days ago I explained how we can do MVVM techniques like ICommand in Qt.

Today I’ll explain how to make and use a simple version of the RelayCommand, which is quite famous in the XAML MVVM world. In the Microsoft Prism 4 & 5 world this is the DelegateCommand. Both are equivalent. I will only show a non-templated RelayCommand, so no RelayCommand<T> for now. Perhaps I’ll add a templated one to that mvvm project some other day.

What people call a delegate in C# is what C++ people call a functor. Obviously we will use functors, then. Note for people actually reading all those links: in C#, Action<T> and Func<T,G> are basically also C# delegates (or functors, if you fancy C++’s name for this more).

Here is the RelayCommand.h:

#include <functional>
#include <QSharedPointer>
#include <MVVM/Commands/AbstractCommand.h>

class RelayCommand : public AbstractCommand
{
    Q_OBJECT
public:
    RelayCommand(std::function<void()> executeDelegatep,
                 std::function<bool()> canExecuteDelegatep,
                 QObject *parent = 0)
    : AbstractCommand(parent)
    , executeDelegate(executeDelegatep)
    , canExecuteDelegate(canExecuteDelegatep) {}

    void execute() Q_DECL_OVERRIDE;
    bool canExecute() const Q_DECL_OVERRIDE;
public slots:
    void evaluateCanExecute();
private:
    std::function<void()> executeDelegate;
    std::function<bool()> canExecuteDelegate;
};

The implementation is too simple to be true:

#include "RelayCommand.h"

bool RelayCommand::canExecute() const
{
    return canExecuteDelegate();
}

void RelayCommand::evaluateCanExecute()
{
    emit canExecuteChanged( canExecute() );
}

void RelayCommand::execute()
{
    executeDelegate();
}

Okay, so how do we use this? First we make a ViewModel, because in this case we will define the command in C++. That probably means you want a ViewModel.

I added a CompositeCommand to the mix. For a Q_PROPERTY, a CommandProxy isn’t really needed, as ownership stays in C++ (when, for example, you pass this as parent). For a Q_INVOKABLE you would need it to wrap the QSharedPointer<AbstractCommand>.

Note: I already hear you think: wait a minute, you are not passing this to the QObject’s constructor, it’s not a QScopedPointer and you have a new but no delete. That’s because CommandProxy converts the ownership rules to QQmlEngine::setObjectOwnership(this, QQmlEngine::JavaScriptOwnership) for itself. I don’t necessarily recommend its usage here (for it’s not immediately clear), but at the same time this is just a demo. You can try printing a warning in the destructor and you’ll see that the QML garbage collector takes care of it.

#include <QObject>
#include <QScopedPointer>
#include <QDebug> // needed for the qWarning() streams below

#include <MVVM/Commands/CommandProxy.h>
#include <MVVM/Commands/CompositeCommand.h>
#include <MVVM/Commands/RelayCommand.h>
#include <MVVM/Models/CommandListModel.h>

class ViewModel: public QObject
{
    Q_OBJECT

    Q_PROPERTY(CommandProxy* helloCommand READ helloCommand CONSTANT)
public:
    ViewModel(QObject *parent=0):QObject(parent),
        helloCmd(new CompositeCommand()){

        QSharedPointer<CompositeCommand> cCmd = helloCmd.dynamicCast<CompositeCommand>();
        cCmd->add( new RelayCommand ([=] { qWarning() << "Hello1 from C++ RelayCommand"; },
                            [=]{ return true; }));
        cCmd->add( new RelayCommand ([=] { qWarning() << "Hello2 from C++ RelayCommand"; },
                            [=]{ return true; }));
        proxyCmd = new CommandProxy (helloCmd);
    }
    CommandProxy* helloCommand() {
        return proxyCmd;
    }
private:
    QSharedPointer<AbstractCommand> helloCmd;
    CommandProxy *proxyCmd;
};

Let’s also make a very simple View.qml that uses the ViewModel

import QtQuick 2.3
import QtQuick.Window 2.0
import QtQuick.Controls 1.2

import Example 1.0

Item {
    property ViewModel viewModel: ViewModel {}

    Button {
        enabled: viewModel.helloCommand.canExecute
        onClicked: viewModel.helloCommand.execute()
    }
}

August 18 2017

20:06

AbstractCommand Model View ViewModel techniques

In the .NET XAML world, you have the ICommand, the CompositeCommand and the DelegateCommand. You use these commands to bind them, in a declarative way, as properties to XAML components like menu items and buttons. You can find an excellent book on this titled Prism 5.0 for WPF.

The ICommand defines two things: a canExecute property and an execute() method. The CompositeCommand allows you to combine multiple commands together; the DelegateCommand makes it possible to pass two delegates (functors or lambdas), one for the canExecute evaluation and one for the execute() method.

The idea here is that you want to make it possible to put said commands in a ViewModel and then data bind them to your View (so in QML that’s with Q_INVOKABLE and Q_PROPERTY). Meaning that the action of the component in the view results in execute() being called, and the component in the view being enabled or not is bound to the canExecute bool property.

In QML that of course corresponds to a ViewModel.cpp for a View.qml. Meanwhile you also want to make it possible to in a declarative way use certain commands in the View.qml without involving the ViewModel.cpp.

So I tried making exactly that. I’ve placed it on github in a project I plan to use more often to collect MVVM techniques I come up with. And in this article I’ll explain how and what. I’ll stick to the header files and the QML file.

We start with defining an AbstractCommand interface. This is very much like .NET’s ICommand, of course:

#include <QObject>

class AbstractCommand : public QObject {
    Q_OBJECT
    Q_PROPERTY(bool canExecute READ canExecute NOTIFY canExecuteChanged)
public:
    AbstractCommand(QObject *parent = 0):QObject(parent){}
    Q_INVOKABLE virtual void execute() = 0;
    virtual bool canExecute() const = 0;
signals:
    void canExecuteChanged(bool canExecute);
};
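The project also has a RelayCommand, the equivalent of .NET’s DelegateCommand, whose header isn’t shown in this article. A minimal sketch, assuming std::function delegates (my assumption, not necessarily the project’s exact signature), could look like this:

#include <functional>

#include <MVVM/Commands/AbstractCommand.h>

// Sketch of a RelayCommand: one delegate for execute() and one for the
// canExecute evaluation. The project's real header may differ.
class RelayCommand : public AbstractCommand
{
    Q_OBJECT
public:
    RelayCommand(std::function<void()> executeDelegate,
                 std::function<bool()> canExecuteDelegate,
                 QObject *parent = 0)
        : AbstractCommand(parent)
        , executeDelegate(executeDelegate)
        , canExecuteDelegate(canExecuteDelegate) {}

    void execute() Q_DECL_OVERRIDE { executeDelegate(); }
    bool canExecute() const Q_DECL_OVERRIDE { return canExecuteDelegate(); }
private:
    std::function<void()> executeDelegate;
    std::function<bool()> canExecuteDelegate;
};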

We will also make a command that is very easy to use in QML, the EmitCommand:

#include <MVVM/Commands/AbstractCommand.h>

class EmitCommand : public AbstractCommand
{
    Q_OBJECT
    Q_PROPERTY(bool canExecute READ canExecute WRITE setCanExecute NOTIFY privateCanExecuteChanged)
public:
    EmitCommand(QObject *parent=0):AbstractCommand(parent){}

    void execute() Q_DECL_OVERRIDE;
    bool canExecute() const Q_DECL_OVERRIDE;
public slots:
    void setCanExecute(bool canExecute);
signals:
    void executes();
    void privateCanExecuteChanged();
private:
    bool canExe = false;
};
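The matching EmitCommand.cpp isn’t shown in the article. A plausible sketch (an assumption on my part, the real file is in the github project) keeps the base class notify and the private notify in sync:

void EmitCommand::execute()
{
    emit executes();  // QML reacts declaratively via onExecutes
}

bool EmitCommand::canExecute() const
{
    return canExe;
}

void EmitCommand::setCanExecute(bool canExecute)
{
    if (canExe == canExecute)
        return;
    canExe = canExecute;
    emit canExecuteChanged(canExe);   // AbstractCommand's notify, for bindings
    emit privateCanExecuteChanged();  // this class's own property notify
}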

We make a command that allows us to combine multiple commands together as one. This is the equivalent of .NET’s CompositeCommand; here you have our own:

#include <QSharedPointer>
#include <QQmlListProperty>

#include <MVVM/Commands/AbstractCommand.h>

class CompositeCommand : public AbstractCommand {
    Q_OBJECT

    Q_PROPERTY(QQmlListProperty<AbstractCommand> commands READ commands NOTIFY commandsChanged )
    Q_CLASSINFO("DefaultProperty", "commands")
public:
    CompositeCommand(QObject *parent = 0):AbstractCommand (parent) {}
    CompositeCommand(QList<QSharedPointer<AbstractCommand> > cmds, QObject *parent=0);
    ~CompositeCommand();
    void execute() Q_DECL_OVERRIDE;
    bool canExecute() const Q_DECL_OVERRIDE;
    void remove(const QSharedPointer<AbstractCommand> &cmd);
    void add(const QSharedPointer<AbstractCommand> &cmd);

    void add(AbstractCommand *cmd);
    void clearCommands();
    QQmlListProperty<AbstractCommand> commands();

signals:
    void commandsChanged();
private slots:
    void onCanExecuteChanged(bool canExecute);
private:
    QList<QSharedPointer<AbstractCommand> > cmds;
    static void appendCommand(QQmlListProperty<AbstractCommand> *lst, AbstractCommand *cmd);
    static AbstractCommand* command(QQmlListProperty<AbstractCommand> *lst, int idx);
    static void clearCommands(QQmlListProperty<AbstractCommand> *lst);
    static int commandCount(QQmlListProperty<AbstractCommand> *lst);
};
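The QQmlListProperty plumbing is what makes the declarative nesting and the “commands” default property work. As a sketch of how the static callbacks could be wired up (the actual CompositeCommand.cpp is in the github project and may differ; the remaining callbacks follow the same pattern):

QQmlListProperty<AbstractCommand> CompositeCommand::commands()
{
    // Wire the QML list interface to the static callbacks below
    return QQmlListProperty<AbstractCommand>(this, 0,
        &CompositeCommand::appendCommand,
        &CompositeCommand::commandCount,
        &CompositeCommand::command,
        &CompositeCommand::clearCommands);
}

void CompositeCommand::appendCommand(QQmlListProperty<AbstractCommand> *lst, AbstractCommand *cmd)
{
    // lst->object is the CompositeCommand instance the QML engine is filling
    CompositeCommand *self = qobject_cast<CompositeCommand *>(lst->object);
    self->add(cmd);
}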

We also make a command that looks a lot like ListElement in QML’s ListModel:

#include <MVVM/Commands/AbstractCommand.h>

class ListCommand : public AbstractCommand
{
    Q_OBJECT
    Q_PROPERTY(AbstractCommand *command READ command WRITE setCommand NOTIFY commandChanged)
    Q_PROPERTY(QString text READ text WRITE setText NOTIFY textChanged)
public:
    ListCommand(QObject *parent = 0):AbstractCommand(parent){}
    void execute() Q_DECL_OVERRIDE;
    bool canExecute() const Q_DECL_OVERRIDE;
    AbstractCommand* command() const;
    void setCommand(AbstractCommand *newCommand);
    void setCommand(const QSharedPointer<AbstractCommand> &newCommand);
    QString text() const;
    void setText(const QString &newValue);
signals:
    void commandChanged();
    void textChanged();
private:
    QSharedPointer<AbstractCommand> cmd;
    QString txt;
};
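ListCommand mostly forwards to the command it carries. The forwarding could look like this (again a sketch, not the project’s verbatim code):

void ListCommand::execute()
{
    if (cmd)
        cmd->execute();  // delegate to the wrapped command
}

bool ListCommand::canExecute() const
{
    return cmd ? cmd->canExecute() : false;
}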

Let’s now also make the equivalent for QML’s ListModel, CommandListModel:

#include <QObject>
#include <QQmlListProperty>

#include <MVVM/Commands/ListCommand.h>

class CommandListModel:public QObject {
    Q_OBJECT
    Q_PROPERTY(QQmlListProperty<ListCommand> commands READ commands NOTIFY commandsChanged )
    Q_CLASSINFO("DefaultProperty", "commands")
public:
    CommandListModel(QObject *parent = 0):QObject(parent){}
    void clearCommands();
    int commandCount() const;
    QQmlListProperty<ListCommand> commands();
    void appendCommand(ListCommand *command);
    ListCommand* command(int idx) const;
signals:
    void commandsChanged();
private:
    static void appendCommand(QQmlListProperty<ListCommand> *lst, ListCommand *cmd);
    static ListCommand* command(QQmlListProperty<ListCommand> *lst, int idx);
    static void clearCommands(QQmlListProperty<ListCommand> *lst);
    static int commandCount(QQmlListProperty<ListCommand> *lst);

    QList<ListCommand* > cmds;
};

Okay, let’s now put all this together in a simple example QML:

import QtQuick 2.3
import QtQuick.Window 2.0
import QtQuick.Controls 1.2

import be.codeminded.mvvm 1.0

import Example 1.0 as A

Window {
    width: 360
    height: 360
    visible: true

    ListView {
        id: listView
        anchors.fill: parent

        delegate: Item {
            height: 20
            width: listView.width
            MouseArea {
                anchors.fill: parent
                onClicked: if (modelData.canExecute) modelData.execute()
            }
            Text {
                anchors.fill: parent
                text: modelData.text
                color: modelData.canExecute ? "black" : "grey"
            }
        }

        model: comsModel.commands

        property bool combineCanExecute: false

        CommandListModel {
            id: comsModel

            ListCommand {
                text: "C++ Lambda command"
                command:  A.LambdaCommand
            }

            ListCommand {
                text: "Enable combined"
                command: EmitCommand {
                    onExecutes: { console.warn( "Hello1");
                        listView.combineCanExecute=true; }
                    canExecute: true
                }
            }

            ListCommand {
                text: "Disable combined"
                command: EmitCommand {
                    onExecutes: { console.warn( "Hello2");
                        listView.combineCanExecute=false; }
                    canExecute: true
                }
            }

            ListCommand {
                text: "Combined emit commands"
                command: CompositeCommand {
                    EmitCommand {
                        onExecutes: console.warn( "Emit command 1");
                        canExecute: listView.combineCanExecute
                    }
                    EmitCommand {
                        onExecutes: console.warn( "Emit command 2");
                        canExecute: listView.combineCanExecute
                    }
                }
            }
        }
    }
}
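The A.LambdaCommand entry above refers to a command object that C++ exposes under the Example 1.0 import. One way to do that, as a sketch (the project’s actual registration may differ), is a QML singleton backed by a RelayCommand:

// e.g. in main(), before loading the QML (needs <QtQml> and <QDebug>):
qmlRegisterSingletonType<AbstractCommand>("Example", 1, 0, "LambdaCommand",
    [](QQmlEngine *, QJSEngine *) -> QObject * {
        // The singleton is a RelayCommand wrapping two C++ lambdas
        return new RelayCommand([] { qWarning() << "Hello from the C++ lambda"; },
                                [] { return true; });
    });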

I made a task-bug for this on Qt, here.


July 25 2017

14:05

Do not use Meson

Recently the Meson Build System gained some momentum. It is time to stop that.
Not that Meson is a bad piece of software – on the contrary, it is quite well designed.
Still it makes building C/C++ applications worse, because it basically creates, as the xkcd comic on standards puts it, yet another competing standard.

It sets out to create a cross-platform, more readable and faster alternative to autotools. But CMake already solves this.

You might say that CMake is ugly, but note that the CMake 2.x you might have tried is not the same CMake 3.x that is available today. Many patterns have improved and are now both more logical and more readable.

Nowadays the difference between Meson and CMake is just a matter of syntactic preference. The Meson authors seem to agree here.

The actual criterion for selecting a build system, however, should be tooling support and community spread. CMake easily wins here:

After the introduction of the server mode it got native support in QtCreator, CLion, Android Studio (NDK) and even Microsoft’s Visual Studio. Native means that you do not have to generate any intermediate project files; the CMakeLists.txt is used directly by the IDE.

On the community spread side we have e.g. KDE, OpenCV, zlib, libpng, freetype and, as of recently, Boost. That these projects use CMake not only guarantees that you can easily use them, but also that you can include them in your build via add_subdirectory such that they become part of your project. This is especially useful if you are cross-compiling – for instance to a Raspberry Pi.

On the other hand, reinventing a wheel that is tailored to the needs of a specific community (Gnome) means that it will fall behind and eventually die. This is what is currently happening to the Vala language, which had a similar birth to Meson.

The Meson devs might object that Meson generates build files that run faster on a Raspberry Pi. However, if your cross-compiling is working, you do not need that. And honestly, that particular improvement could also have been achieved by providing a patch to the CMake Ninja generator.

Addendum 4-1-2018
Some comments (rightfully) note that Meson generally has better documentation and avoids some of CMake’s pitfalls. However, this is mostly because Meson has not been around long enough for the way you do things in it to have changed, nor has it yet seen use as widespread as CMake’s (think of corner cases).

But even if you argue that this is precisely why you should use Meson, I would argue that improving the existing documentation in CMake and adding more educational warnings is easier than writing something from scratch.


July 11 2017

20:17

Meet the new Q2 2017 Maemo Community Council

Dear Maemo community, I have the great honor of introducing the new Community Council for the upcoming Q2/2017 period.

The members of the new council are (in alphabetical order):

  • Juiceme (Jussi Ohenoja)
  • Mosen (Timo Könnecke)
  • Sicelo (Sicelo Mhlongo)

The voting results can be seen on the voting page.

I want to warmly thank all the members of the community who participated in this most important action of choosing a new council for us!

The new council shall meet on the #maemo-meeting IRC channel next Tuesday 18.06 at 20:00 UTC for the formal handover with the outgoing council.

Jussi Ohenoja, On behalf of the outgoing Maemo Community Council
