Cross-Platform? We Don’t Say That Around Here Anymore.

In the application development world, you hear the term cross-platform all the time. The idea of “write once, run anywhere” is an unreachable promised land for management stakeholders, and a frustrating, unrealistic expectation for developers.

At Touchlab, we don’t use the term cross-platform. We prefer “multiplatform” – for a couple of important reasons.



First, cross-platform has a lot of baggage because of existing development platforms. PhoneGap, Titanium, Xamarin, Ionic, RubyMotion… the list is long. Initially, these solutions went through a phase of exuberance, followed by disappointment. Developers were frustrated with not being able to express the UI how they wanted, unavailability of features on their wish list, and invariably, teams would end up managing three platforms – cross-platform, iOS and Android. For product owners — and ultimately, the end-users — the UX was rarely satisfying.

Note: If your organization doesn’t care about native, traditional cross-platform solutions might be okay. Context matters.


Wide Net

Second, cross-platform is an inaccurate term because the solutions lumped under it are wildly different. We believe you need much more nuanced terminology to describe coding for a variety of platforms. The way we see it, when a native developer sees anything that isn’t Java, Kotlin or Swift, they lump every alternative into “cross-platform” and immediately classify them all as bad based on a single bad experience with one solution. “If it’s not native, it’s bad. I would know, we tried PhoneGap once.”

And while the world has always been ready for shared code, it’s rarely been implemented well. The good news? There are now better options to make things really work.



At Touchlab, we’re about shared logic that interops natively with the platform you’re working with, and homogenized logic and libraries – all of which should sit below the UI and be performant. Simply put, our multiplatform approach can’t and shouldn’t be compared to “cross-platform.” It’s not an apples-to-apples comparison.

We call it “post-platform thinking”. It doesn’t matter what platform you’re delivering on, because the ultimate goal is to have a well-tested and well-architected backend, middle communication layer, and front-end architectural layer that’s all homogeneous.

So, if a developer can build the logic from the back-end to the front-end, and model the logic for the UI in a microservices model, the developer’s job becomes one of service aggregator rather than implementer. The iOS developer can focus on UI and UX, not on implementing the services themselves.


Kotlin Multiplatform

And that’s what Kotlin Multiplatform delivers: natively shared, performant, tested architecture with a fantastic developer experience and ecosystem. It gives you efficient runtimes and real testability, which makes shared logic far more achievable. The architectural layer is well tested, well engineered, and economical and efficient from a development standpoint, so it’s easy to move to another UI.

So why is Kotlin the way to go?  Here’s the thing: anything that requires you to make large decisions and potentially large rewrites, perform large retrainings and rehirings, or anything that has to share UI or doesn’t work well with a native platform is, well, very risky. The fact is, Kotlin Multiplatform is not risky in that same way, and that’s why we’re working with it at Touchlab.

Kotlin is a modern language with enthusiastic community support, and Kotlin Multiplatform allows optional native interoperability with the platform on which you’re working, with straightforward logic and architecture: Android, iOS, desktop platforms, JavaScript, and WebAssembly. For me, the idea behind Kotlin Multiplatform is that you can invest in this way of building logic without having to guess which way the industry will progress in the next 5 or 10 years. If everything moves to the Web, you can support that. If it all goes mobile, you can support that too. And that’s true multiplatform.

Calling Engineers to Take our Kotlin Multiplatform Survey!

Survey Link


It’s no secret that we’re major Kotlin Multiplatform fans (spoiler alert!). With your input, the survey can help us prioritize our engineering efforts and start a conversation with folks who aren’t quite as excited as we are.

You’ll see basic, information-gathering questions as well as exploratory questions that should help us capture trends and thought processes in the multiplatform space.

Thank you in advance for taking the time to complete our short survey below (3-5 minutes). 


And now for a special surprise!


As a thank you for completing our survey, you will receive an invitation to attend our Kotlin Multiplatform Webinar.

Date: Thursday, March 28th, 2019 from 1-2 pm EST

The webinar is for developers and engineering managers interested in learning why the future of cross-platform is native, and how Kotlin Multiplatform is different from “The Others” (Xamarin, React Native and Flutter).

Kotlin Native Stranger Threads Ep 2

Episode 2 — Two Rules

This is part 2 of my Kotlin Native threading series. In part 1 we got some test code running and introduced some of the basic concurrency concepts. In part 2 we’re going a bit deeper on those concepts and peeking a bit under the hood.

Some reminders:

  1. KN means Kotlin Native.
  2. Emoji usually means a footnote 🐾.
  3. Sample code found here:

Look in the nativeTest folder: src/nativeTest/kotlin/sample

Two Rules

K/N defines two basic rules about threads and state:

  1. Live state belongs to one thread
  2. Frozen state can be shared

These rules exist to make concurrency simpler and safer.

Predictable Mutability

Rule #1 is easy to understand. If state is accessible from only a single thread, you simply can’t have concurrency issues. That’s why JavaScript is single threaded 😤.

If you already have a general understanding of thread confinement, just know that the KN runtime enforces an aggressive form of it on all mutable state, and feel free to skip to “Seriously Immutable”.

As an example that you’re likely familiar with, think of the Android and iOS UI systems. Have you ever thought about why there’s a main thread? Imagine you’re implementing that system. How would you architect concurrency?

The UI system is either currently rendering, or waiting for the next render. While rendering, you wouldn’t want anybody modifying the UI state. While waiting, you wouldn’t care.

You could enforce some complex synchronization rules, such that all components’ data locks during rendering, and other threads could call into synchronized methods to update data, and wait while rendering is happening. I won’t even get into how complex that would probably be. The UI system really gets nothing out of it, and synchronizing data has significant costs related to memory consistency and compiler optimization.

Rather than letting other threads modify data, the UI system schedules tasks on a single thread. While the UI system is rendering, it is occupying that thread, effectively “locking” that state. When the UI is done, whatever you’ve scheduled to run can modify that state.

To enforce this scheme, UI components check the calling thread when a method is called, and throw an exception if it’s the wrong thread. While manual, it’s a cheap and simple way to make things “thread safe”.
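That manual check can be sketched in a few lines of common Kotlin. This is a hypothetical component with illustrative names, not Android’s or UIKit’s actual implementation:

```kotlin
// A minimal sketch of manual thread confinement; the component
// remembers its owner thread and rejects calls from any other.
class UiComponent {
    private val owner: Thread = Thread.currentThread()
    private var text: String = ""

    fun setText(value: String) {
        checkThread()
        text = value
    }

    private fun checkThread() {
        // check() throws IllegalStateException on the wrong thread
        check(Thread.currentThread() === owner) {
            "UiComponent accessed from wrong thread: ${Thread.currentThread().name}"
        }
    }
}
```

Calling `setText` from the creating thread succeeds; calling it from any other thread throws, which is the same cheap trick the real UI toolkits use.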

That’s effectively what’s happening here. Mutable state is restricted to one thread. Some differences are:

  1. In the case of UI, you’ve got a “main” thread and “everything else”. In KN, every thread is its own context. Mutable state exists wherever it is created or moved to.
  2. UI (and other system) components need to explicitly guard for proper thread access. KN bakes this into the compiler and runtime. It’s technically possible to work around this, but the runtime is pretty good at preventing it 🧨.

Maintaining coherent multithreaded code is one of those rabbit holes that keeps delivering fresh horrors the deeper you go. Makes for good reading, though.

Seriously Immutable

Rule #2 is also pretty easy to understand. More generally stated, the rule is immutable state can be shared. If something can’t be changed, there are no concurrency issues, even if multiple threads are looking at it.

That’s great in principle, but how do you verify that a piece of state is immutable? It would be possible to check at runtime that some state is composed entirely of vals or whatever, but that introduces a number of practical issues. Instead, KN introduces the concept of frozen state: a runtime designation that enforces immutability and quickly lets the KN runtime know that state can be shared.

As far as the KN runtime is concerned, all non-frozen state is possibly mutable, and restricted to one thread.


Freeze is a process specific to KN. There’s a function defined on all objects.

public fun <T> T.freeze(): T

To freeze some state, call that method. The runtime then recursively freezes everything that state touches.


Once frozen, the object’s runtime metadata is modified to reflect its new status. When handing state to another thread, the runtime checks that it’s safe to do so. That (basically) means either there are no external references to it and ownership can be given to another thread, or that it’s frozen and can be shared 🤔.

Freezing is a one way process. You can’t un-freeze.
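A quick sketch of that one-way transition (Kotlin/Native only; `freeze` and `isFrozen` live in `kotlin.native.concurrent`, and `Config` is just an illustrative class):

```kotlin
import kotlin.native.concurrent.freeze
import kotlin.native.concurrent.isFrozen

data class Config(val name: String)

fun freezeIsOneWay() {
    val config = Config("prod")
    println(config.isFrozen) // false: regular state starts out unfrozen

    config.freeze()
    println(config.isFrozen) // true: and there is no API to undo this
}
```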

Everything Is Frozen

The freeze process will capture everything in the object graph that’s being referenced by the target object, and recursively apply freeze to everything referenced.

data class SomeState(val s: String)
data class MoreState(val someState: SomeState, val a: String)

fun recursiveFreeze() {
  val moreState = MoreState(SomeState("child"), "me")
  moreState.freeze()
  //The nested SomeState instance is frozen too
  assertTrue(moreState.someState.isFrozen)
}

In general, freezing state on data objects is pretty simple. Where this can get tricky is with things like lambdas. We’ll talk more about this when we discuss concurrency in the context of application development, but here’s a quick example I give in talks.

val worker = Worker.start()

fun lambdaFail() {
  var count = 0

  //The lambda captures the local var count
  val job: () -> Int = {
    for (i in 0 until 10) {
      count++
    }
    count
  }

  //Freezing job also freezes count, so the increment in the
  //worker throws, and count is never changed
  val future = worker.execute(TransferMode.SAFE, { job.freeze() }) {
    it()
  }

  assertEquals(0, count)
}

There’s a bit to unpack in the code above. The KN compiler tries to prevent you from getting into trouble, so you have to work pretty hard to get the count var into the lambda and throw it over the wall to execute.

Any function you pass to another thread, just like any other state, needs to follow the two rules. In this case we’re freezing the lambda job before handing it to the worker. The lambda job captures count. Freezing job freezes count ⚛️.

The lambda actually throws an exception, but exceptions aren’t automatically bubbled up. You need to check Future for that, which will become easier soon.

Freeze Fail

Freezing is a process. It can fail. If you want to make sure a particular object is never frozen recursively, you can call ensureNeverFrozen on it. This will tell the runtime to throw an exception if anybody tries to freeze it.

fun ensureNeverFrozen() {
  val noFreeze = SomeData("Warm")
  noFreeze.ensureNeverFrozen()
  assertFails { noFreeze.freeze() }
}

Remember that freezing acts recursively, so if something is getting frozen unintentionally, ensureNeverFrozen can help debug.

Global State

Kotlin lets you define global state. You can have global vars and objects. KN has some special rules for global state.

var and val defined globally are only available in the main thread, and are mutable 🤯. If you try to access them on another thread, you’ll get an exception.

val globalState = SomeData("Hi")

data class SomeData(val s: String)

//In test class...
fun globalVal() {
  assertFails {
    worker.execute(TransferMode.SAFE, {}) {
      globalState
    }.result
  }
}

object definitions are frozen on init by default. It’s a convenient place to put global service objects.

object GlobalObject {
  val someData = SomeData("arst")
}

//In test class...
fun globalObject() {
  val wval = worker.execute(TransferMode.SAFE, {}) {
    GlobalObject
  }.result
  assertSame(GlobalObject, wval)
}

You can override these defaults with the annotations ThreadLocal and SharedImmutable. ThreadLocal means every thread gets its own copy. SharedImmutable means that state is frozen and shared between threads (the default for global objects).

@ThreadLocal
val thGlobalState = SomeData("Hi")
fun thGlobalVal() {
  val wval = worker.execute(TransferMode.SAFE, {}) {
    thGlobalState
  }.result

  assertNotSame(thGlobalState, wval)
}

@SharedImmutable
val sharedGlobalState = SomeData("Hi")
fun sharedGlobalVal() {
  val wval = worker.execute(TransferMode.SAFE, {}) {
    sharedGlobalState
  }.result

  assertSame(sharedGlobalState, wval)
}


Frozen state is mostly immutable. The K/N runtime defines a set of atomic classes that let you modify their contents, yet to the runtime they’re still frozen. There’s AtomicInt and AtomicLong, which let you do simple math, and the more interesting class, AtomicReference.

AtomicReference holds an object reference, which must itself be a frozen object, but which you can change. This isn’t super useful for regular data objects, but can be very handy for things like global service objects (database connections, etc).
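As a sketch of that global-service pattern (Kotlin/Native only; the `DatabaseConnection` type and the `ServiceRegistry` name are hypothetical, for illustration):

```kotlin
import kotlin.native.concurrent.AtomicReference
import kotlin.native.concurrent.freeze

// Hypothetical service type, for illustration only
class DatabaseConnection(val url: String)

// The object is frozen on init, but the AtomicReference inside
// it can still be pointed at a new (frozen) value later
object ServiceRegistry {
    private val dbRef = AtomicReference<DatabaseConnection?>(null)

    fun install(db: DatabaseConnection) {
        dbRef.value = db.freeze() // values going in must be frozen
    }

    fun db(): DatabaseConnection =
        dbRef.value ?: error("No connection installed")
}
```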

It’ll also be useful in the early days as we rethink best practices and architectural patterns. KN’s state model is significantly different compared to the JVM and Swift/ObjC. Best practices won’t emerge overnight. It will take some time to try things, write blog posts, copy ideas from other ecosystems, etc.

Here’s some basic sample code:

data class SomeData(val s: String)

fun atomicReference() {
  val someData = SomeData("Hello").freeze()
  val ref = AtomicReference(someData)
  assertEquals("Hello", ref.value.s)
}

Once created, you reference the data in the AtomicReference by the ‘value’ property.

Note: The value you’re passing into the AtomicReference must itself be frozen.

fun notFrozen() {
  val someData = SomeData("Hello")
  assertFails {
    AtomicReference(someData)
  }
}

So far that’s not doing a whole lot. Here’s where it gets interesting.

class Wrapper(someData: SomeData) {
  val reference = AtomicReference(someData)
}

fun swapReference() {
  val initVal = SomeData("First").freeze()
  val wrapper = Wrapper(initVal).freeze()
  assertEquals(wrapper.reference.value.s, "First")
  wrapper.reference.value = SomeData("Second").freeze()
  assertEquals(wrapper.reference.value.s, "Second")
}

The Wrapper instance is initialized and frozen, then we change the value it references. Again, it’s very important to remember the values going into the AtomicReference need to be themselves frozen, but this allows us to change shared state.

Should You?

The concepts introduced in KN around concurrency are there for safety reasons. On some level, using atomics to share mutable state is working around those rules. I was tempted to abuse them a bit early on, and they can be very useful in certain situations, but try not to go crazy.

AtomicReference is a synchronized structure. Compared to “normal” state, its access and modification performance will be slower. That’s something to keep in mind when making architecture decisions.

A small side note. According to the docs, you should clear out AtomicReference instances because they can leak memory. They won’t always leak. As far as I can tell, leaks may happen if the value you’re storing has cyclical references. Otherwise, data should clear fine when you’re done with it. Also, you’re generally storing global, long-lived data in an AtomicReference, so it’s often hanging out “forever” anyway.
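One way to release the held value when you’re done, assuming a nullable type parameter so the reference can be cleared (a sketch, not an official pattern):

```kotlin
import kotlin.native.concurrent.AtomicReference
import kotlin.native.concurrent.freeze

data class BigState(val payload: List<String>)

val stateRef = AtomicReference<BigState?>(null)

fun useAndClear() {
    stateRef.value = BigState(listOf("a", "b")).freeze()
    // ... use stateRef.value while needed ...

    // Nulling the reference lets the old value be reclaimed,
    // sidestepping the cyclic-reference leak mentioned in the docs
    stateRef.value = null
}
```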

Transferring Data

According to rule #1, mutable state can only exist in one thread, but it can be detached and given to another thread. This allows mutable state to remain mutable.

Detaching state actually removes it from the memory management system, and while detached, you can think of it as being in its own little limbo world. If for some reason you lose the detached reference before it’s reattached, it’ll just hang out there taking up space 🛰.

Syntactically, detaching is very similar to what we experienced with the Worker producer. That makes sense, because Worker.execute is detaching whatever is returned from producer (assuming it’s not frozen). You need to make sure there are no external references to the data you’re trying to detach. The same syntactically complex situations apply.

You detach state by creating an instance of DetachedObjectGraph, which takes a lambda argument: the producer.

data class SomeData(val s: String)

fun detach() {
  val ptr = DetachedObjectGraph { SomeData("Hi") }.asCPointer()
  assertEquals("Hi", DetachedObjectGraph<SomeData>(ptr).attach().s)
}

Like Worker’s producer, non-zero reference counts will fail the detach process.

fun detachFails() {
  val data = SomeData("Nope")
  assertFails { DetachedObjectGraph { data } }
}

The detach process visits each object in the target’s graph when performing the operation. That is something that can be a performance consideration in some cases 🐢.

I personally don’t use detach often directly, although I’ll go over some techniques we use in app architectures in a future post.


We mentioned TransferMode a lot back in the last post. Now that we have some more background, we can dig into it a bit more.

Both Worker.execute and DetachedObjectGraph take a TransferMode parameter. DetachedObjectGraph simply defaults to SAFE, while Worker.execute requires it explicitly. I’m not sure why Worker.execute doesn’t also default. If you want a smart sounding question to ask at KotlinConf next year, there you go.

What do they do? Essentially, UNSAFE lets you bypass the safety checks on state. I’m not 100% sure why you’d want to do that, but you can.

It’s possible that you’re very sure your data would be safe to share across threads, but the purpose of the KN threading rules is to enable the runtime to verify that safety. Simply passing data with UNSAFE would defeat that purpose, but there’s a far more important reason not to do it.

We mentioned back in post #1 that memory is reference counted. There’s actually a field in the object header that keeps count of how many references exist to an object, and when that number reaches zero, the memory is reclaimed.

That ref count itself is a piece of state. It is subject to the same concurrency rules that any other state is. The runtime assumes that non-frozen state is local to the current thread and, as a result, reference counts don’t need to be atomic. As reference counting happens often and is not avoidable, being able to do local math vs atomic should presumably have a significant performance difference 🚀.

Even if the state you’re sharing is immutable but not frozen, if two threads can see the same non-frozen state you can wind up with memory management race conditions. That means leaks (bad) or trying to reference objects that have been freed (way, way worse). If you want to see that in action, uncomment the following block at the bottom of WorkerTest.kt:

fun unsafe() {
  for (i in 0 until 1000) {
    println("loop run $i")
    unsafeLoop()
  }
}

private fun unsafeLoop() {
  val args = Array(1000) { i ->
    JobArg("arg $i")
  }

  //Both threads now see the same non-frozen state
  val f = worker.execute(TransferMode.UNSAFE, { args }) {
    it.forEach {
      println(it)
    }
  }

  args.forEach {
    println(it)
  }

  f.result
}

I won’t claim to know everything in life, but I do know you don’t want memory management race conditions.

Also, as hinted with 🤔, the shared type throws a new special case wrinkle into all of this, but that’s out of scope for today, and only used with one class for now.

Maybe This Is All Going Away

Now that we’ve covered what these concurrency rules are, I’d like to address a more existential topic.

I started really digging into KN about a year ago, and I wasn’t a fan of the threading model at first. There’s a lot to learn here, and the more difficult this is to pick up, the less likely we’ll see adoption, right?!

I think having Native and JVM with different models can be confusing, although you do get used to it. Better libraries, better debug output, being able to debug in the IDE at all: these things will help.

There is some indication that these rules may change, though there’s been little clarification as to what that means. Reading through those comments, and from my community conversations, there is definitely some desire for the Kotlin team to “give up” and have “normal” threading. However, that’s not a universal opinion, it’s pretty unlikely to happen (100%, anyway), and I think it would be bad if it did. If anything, I’d rather see the ability to run the JVM in some sort of compatibility mode that applies the same rules.

What I think we could all agree on is the uncertainty won’t help adoption. If there’s likely to be significant change to how Native works in the near term, going through the effort of learning this stuff isn’t worth it. I’m really hoping for some clarification soon.

Up Next

We’ve covered the basics of state and threading in KN. That’s important to understand, but presumably you’re looking to share logic using multiplatform. JVM and JS have different state rules. Future posts will discuss how we create shared code while respecting these different systems, and how we implement concurrency in a shared context. TL;DR we’re making apps!

🐾 I’m prone to tangents. Footnotes let you decide to go off with me.

😤 Yes, workers. All the state in the worker is still local to a single thread, and the vast majority of JS simply lives in the single thread world.

🧨 In more recent versions of KN, the runtime seems to catch instances of code accessing state on the wrong thread due to native OS interop code, but I’m not sure how comprehensive that is. In earlier versions it didn’t really check much, and it was pretty easy to blow up your code. If you want to experience that for yourself, try passing some non-frozen state around with TransferMode.UNSAFE.

🤔 There’s a recent addition to the runtime that defines another state called shared, effectively allowing shared non-frozen state. You can’t call it to create shared objects yourself 🔓; it’s an internal method. Currently you can create a shared byte array using MutableData, and that’s about it. I’d be surprised if that’s the only thing “shared” is used for. We’ll see.

🔓 That’s not 100% true. I’m pretty sure you could call it, but I’m not saying how 🙂

⚛️ You can make this example work with an atomic val, although I’m guessing you wouldn’t actually write it this way. But…

fun atomicCount() {
  val count = AtomicInt(0)

  val job: () -> Int = {
    for (i in 0 until 10) {
      count.increment()
    }
    count.value
  }

  val future = worker.execute(TransferMode.SAFE, { job.freeze() }) {
    it()
  }

  assertEquals(10, future.result)
  assertEquals(10, count.value)
}

🤯 This wasn’t true early on. Each worker would get its own copy on init. That’s a problem if you do something like define a global worker val (which gets created on init, and defines a worker val, which gets created on init…).

🛰 I’ve made a horrible Star Trek analogy. It’s like beaming. It can live on one side or the other, but can do nothing in transit. If it results in the same entity in two places you have a big problem, and it can be lost in transit forever.

🚀 I’m definitely not suggesting you do any premature optimization, but if you were constructing some performance critical code and had a big pile of state hanging around, it’s maybe something to consider. Non-frozen state wouldn’t have the same accounting cost. Now forget you read this.

Kotlin Native Stranger Threads Ep 1

Episode 1 — Worker

My original post about Kotlin Native (KN) concurrency was written a while ago, with a much earlier version of Native and Multiplatform. Now that Kotlin Multiplatform is ready for production development, it’s time to revisit how Native concurrency works and how to use it in your application development.

Concurrency and state in KN is significantly different compared to what you’re likely used to. Languages like Java, Swift, Objective-C, and C++ give the developer tools to ensure proper concurrent state access, but using them properly is up to the developer. Writing concurrent code in these languages can be difficult and error prone. KN, by contrast, introduces constraints that allow the runtime to verify that concurrent access is safe, while also providing for reasonable flexibility. It is trying to find a balance between safety and access. What that means is changing, and even within Jetbrains there appear to be conflicting visions. What is clear, however, is that Jetbrains is committed to Saner Concurrency, and to building a platform for the future.

In this series we’ll cover the rules and structures of KN’s concurrency and state model, and how they apply in the context of application development.

Just FYI, if you see emoji in the doc, that’s generally a footnote with unnecessary info 😛.

Episode 1 — Workers ⏮

Kotlin Native (KN) concurrency is kind of a big topic. For developers familiar with Java and Swift/ObjC concurrency, there are several new concepts to learn, which presents a problem out of the gate. Where to start?

In general, I like to be able to play with the code right away, so we’ll start with a core KN concurrency mechanism: Worker. We’ll encounter some concepts before we’ve had a chance to explain them, but we’ll sort that out later on in the series.

The code samples in this post can be found here. You’ll need a macOS machine to run them. Adding other platforms should be pretty simple, if anybody wants to give it a shot.

Most of the examples are implemented as unit tests 🔍. You can run them by typing:

./gradlew build


KN supports concurrency out of the box using a structure called “Worker”. A Worker is a job queue on which you can schedule jobs to run on a different thread.

The Worker related tests can be found here.

Creating a worker is relatively straightforward.

import kotlin.native.concurrent.Worker

class TestWorker {
  val worker = Worker.start()
}

Each Worker instance gets a thread 📄. You can schedule jobs to be run on the worker’s thread.

worker.execute(TransferMode.SAFE, {"Hello"}) {
  //Do something on Worker thread
}

There are a few things to take note of in that call. Here’s the function definition for execute:

fun <T1, T2> execute(
        mode: TransferMode,
        producer: () -> T1,
        job: (T1) -> T2): Future<T2>

We’ll discuss TransferMode in part 2. In summary, there are two options: SAFE and UNSAFE. Just assume it’s always TransferMode.SAFE.

The producer parameter is a lambda that returns the input to the background job (generic type T1). That’s how you pass data to your background task.

It’s critically important to understand that whatever gets returned from the producer lambda is intended to be passed to another thread, and as a result, must follow KN state and concurrency rules. That means it either needs to be frozen, or needs to be fully detachable. In theory, being detachable is simple, but in practice it can be tricky. We’ll talk about that in a bit.

The job parameter is the work you intend to do on the background thread. It will take the result of the producer (T1) as a parameter and return a result (T2) that will be available from the Future.

We’ll discuss this more later on, but it’s a super important topic and can bear some repetition. It is very easy to accidentally capture outside state in the job lambda. This is not allowed, and the compiler will complain. You’ll need to be extra careful to avoid doing that.

Execute’s return is ‘Future&lt;T2&gt;’. Your calling thread can block and wait for this value, but in an interactive application we’ll need a way back to the calling context that doesn’t interrupt the UI.


The producer’s job is very simple. Isolate a parameter value to hand off to the background job. You’ll see the producer lambda both here and when we need to detach an object from the object graph. It’s a little confusing at first, but understanding what’s happening with the producer will help clear up KN’s broader concurrency concepts.

Take note of the fact that the producer is a lambda and not just a value. It doesn’t look like this.

worker.execute(TransferMode.SAFE, "Hello") {
  //Do something
}

That is (presumably) to make isolating and detaching the object reference easier.

The producer is run in whatever thread you’re calling it from. The result of that lambda is then checked to make sure it can be safely given to the worker’s thread. However, to be clear, all of that activity happens in your current thread. We only engage the worker’s thread when we get to the background job.

Haven’t left the calling thread yet

How do we determine that some state can be safely given to another thread? We have to respect KN’s two basic rules:

  1. Live state belongs to one thread
  2. Frozen state can be shared

Part two is all about the two rules, but in summary:

  1. Live state is the state you’re used to writing
  2. Frozen is, basically, super-immutable. You create frozen state by calling ‘freeze’ on it

Note: We’ll start using data classes rather than String. Strings, as well as other basic value types, are frozen automatically by the runtime.
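You can verify that for yourself (Kotlin/Native only; `isFrozen` comes from `kotlin.native.concurrent`):

```kotlin
import kotlin.native.concurrent.isFrozen

fun stringsAreFrozen() {
    val s = "Hello"
    // String literals are already frozen by the runtime, which is
    // why they can cross threads without an explicit freeze() call
    println(s.isFrozen) // true
}
```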

Here’s a basic example:

data class JobArg(val a: String)

fun simpleProducer() {
  worker.execute(TransferMode.SAFE, { JobArg("Hi") }) {
    println(it)
  }
}

We create an instance of JobArg inside the producer. There are no external references (nobody has a reference to that instance of JobArg), so the runtime can safely detach and pass the state to the job lambda to be run in another thread.

This, by contrast, fails.

fun frameReferenceFails() {
  val valArg = JobArg("Hi")
  assertFails {
    worker.execute(TransferMode.SAFE, { valArg }) {
      println(it)
    }
  }
}

When we call execute, valArg is being referenced locally, so the attempt to detach will fail.

This looks like a way to hide the reference, but also fails:

class ArgHolder(var arg: JobArg?) {
  fun getAndClear(): JobArg {
    val temp = arg!!
    arg = null
    return temp
  }
}

fun stillVisible() {
  val holder = ArgHolder(JobArg("Hi"))
  assertFails {
    worker.execute(TransferMode.SAFE, { holder.getAndClear() }) {
      println(it)
    }
  }
}

Why? Well, this gets a bit into the weeds of how KN’s memory model works. Native doesn’t use a garbage collector 🚮. It uses reference counting. Each allocated object has a count of how many other entities have a reference to it. When that count goes to zero, that memory is freed.

iOS developers will have an easier time with this concept, as this is how Swift and ObjC work 🍎.

References to objects obviously include hard field references, but also include local frame references. That’s what’s wrong with the block above. The JobArg appears in the local frame context, however briefly, which still has a reference to it when the producer attempts to detach it.

Outside context has a local reference

This, however, will work:

fun makeInstance() = ArgHolder(JobArg("Hi"))

fun canDetach() {
  val holder = makeInstance()
  worker.execute(TransferMode.SAFE, { holder.getAndClear() }) {
    println(it)
  }
}

The local ref is cleared in ‘makeInstance’. So again, if you’re wondering why the producer is a lambda, it’s to make avoiding local references easier. Look at simpleProducer again:

fun simpleProducer() {
  worker.execute(TransferMode.SAFE, { JobArg("Hi") }) {
    println(it)
  }
}

Much simpler.


Passing live data is difficult syntactically. In fact, we don’t have multithreaded coroutines yet because JetBrains still needs to reconcile the two systems 😟. I gave you some pretty weird examples out of the gate on purpose. KN makes passing mutable state between threads difficult, and in general that’s a good thing, because it’s risky. When I need to pass something into a worker I’ll almost always freeze it.

fun frozenFtw() {
  val valArg = JobArg("Hi").freeze()
  worker.execute(TransferMode.SAFE, { valArg }) {
    println(it)
  }
}

Because frozen data can be shared between threads, the producer can return valArg. This is obviously a simple example, but as you get into Native development, you’ll generally find freezing data to be simpler, and in general, data that you’re passing around should be immutable anyway.

I should mention that you can bypass all of this and pass data with TransferMode.UNSAFE, and it’ll probably work most of the time. Don’t do it, though. It’s called UNSAFE for a reason: if you can’t clearly explain why you would use it, you never should. We’ll discuss this in detail in part 2.

We spent a lot of time on the producer, but again, the producer introduces a number of core, and potentially confusing, topics. If you can fully grasp what’s going on there, you’ll have covered a lot of ground.

Background Job

What happens with the background lambda, compared to what was happening with the producer, is much simpler. The lambda takes a single parameter, which is the result of the producer (which, btw, can be empty). If the background job returns a value, it’ll be available from the Future.

fun backgroundStuff() {
  val future = worker.execute(TransferMode.SAFE, { 1_000_000 }) { times ->
    var count = 0
    for (i in 0 until times) {
      //Do some long stuff
      count++
    }
    count
  }
  assertEquals(1_000_000, future.result)
}

Here we’re going to loop and count. We pass the number of loops in the producer.

Just FYI, be careful with threads and unit tests 🍌. The ‘future.result’ forces the thread to wait for the background lambda to finish.

Until now, everything happened in the original calling context. The background job finally gets us into the second thread.

Since the job runs on a different thread, it can’t reference just any state: only the lambda parameter of type T1 (from our friend the producer), plus global state known to be frozen or thread local. In other words, only state that the KN runtime can verify is safe to access.
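Sketched concretely: the annotations below are real parts of kotlin.native.concurrent, while frozenGlobal and counter are illustrative names. A @SharedImmutable global is frozen at startup and readable from any thread; a @ThreadLocal global gets a separate copy per thread, including per Worker:

```kotlin
import kotlin.native.concurrent.*

@SharedImmutable
val frozenGlobal = listOf(1, 2, 3) // frozen at startup, readable from any thread

@ThreadLocal
var counter = 0 // every thread, including each Worker, gets its own copy

fun globalStateDemo(worker: Worker): Int {
    counter = 5 // mutates only the calling thread's copy
    val future = worker.execute(TransferMode.SAFE, {}) {
        // The worker sees the frozen global, plus its own counter (still 0 here)
        frozenGlobal.sum() + counter
    }
    return future.result // 6 on the current memory model, not 11
}
```

The thread-local copy is the part that surprises people: mutating counter on the main thread has no effect on what the worker sees.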

As mentioned previously, it’s pretty easy to capture other state in the lambda of your background task. The compiler attempts to prevent this, but only when you’re calling the worker method directly. We’ll dive deeper into that when we talk about actually implementing concurrency in your applications.

In simple examples, capturing extra state won’t be much of a problem. Where this quickly becomes problematic is capturing state when you call background tasks from your application objects. I found this difficult at first, but you get used to it. Frameworks help, and especially when multithreaded coroutines become available, running tasks in the background will be simpler 😴.


The ‘execute’ method returns a Future instance, which can be used to check the status of the background process, as well as get the value returned. The value can be Unit, which means you’ll simply verify that the process completed.

If it’s OK to block the calling thread, the simplest way to get your result is to call the result property on the Future instance. That’s what we’re doing in the test examples.

Alternatively you can poll status on the Future, or set up a result Worker to call back to. However, if you’re intending to use Worker in the context of a mobile application, going “back to the main thread” is somewhat more complex. We’ll discuss that later.
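As a sketch of the polling alternative (FutureState and consume are part of the Worker API; the busy-wait here is just for illustration, a real app would return to its run loop between checks):

```kotlin
import kotlin.native.concurrent.*

fun pollForResult(worker: Worker): Int {
    val future = worker.execute(TransferMode.SAFE, { 21 }) { it * 2 }
    // Check status instead of blocking on 'result'
    while (future.state != FutureState.COMPUTED) {
        // A real app would yield back to its run loop here instead of spinning
    }
    // consume hands over the value and invalidates the Future afterward
    return future.consume { it } // 42
}
```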


We don’t worry about it too much in the context of our test samples, but you should shut down Workers when you’re done with them. This is only necessary if you’re going to keep the process running but abandon the Worker. If your Worker instances are meant to live along with your process, you can leave them hanging around (they get shut down with the process).

fun requestTermination() {
  val w = Worker.start()
  w.requestTermination().result
}

requestTermination returns a Future. If you need to wait for termination, check the result.

You Probably Won’t Use Worker

In the same way you probably don’t create a Thread instance or an ExecutorService very often in Java, libraries will probably keep you away from creating Worker instances directly. Unless KN state rules radically change, however, you won’t get away from those. You will, however, be seeing Worker a lot for the next few posts at least.
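For a taste of what such a library layer might look like, here’s a hypothetical helper (backgroundTask and sharedWorker are illustrative names, not any real library’s API). Freezing the work lambda in the producer is what lets it legally cross the thread boundary:

```kotlin
import kotlin.native.concurrent.*

private val sharedWorker = Worker.start()

// Hypothetical convenience wrapper of the sort a library might provide:
// freeze the lambda so the runtime allows it across threads, then invoke
// it on the worker. The result comes back through the usual Future.
fun <T> backgroundTask(block: () -> T): Future<T> =
    sharedWorker.execute(TransferMode.SAFE, { block.freeze() }) { it() }
```

A call like `backgroundTask { someExpensiveComputation() }` then reads like ordinary async code, with TransferMode and freezing hidden inside, which is roughly the shape most KN concurrency libraries take.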

Up Next

Worker introduces us to the basics of running concurrent code on Native. Part 2 will go deeper into the why of KN state rules, freezing, detaching, and some more detail about what’s happening under the hood.

😛 But super interesting info!!!

OK. It’s not exactly Episode 1. The earlier post, from about 8 months ago, was supposed to be the start of the series, but things were changing really fast and I got more involved in library development. Yada yada, we’ll call that the pilot and this is the start of the series.

🔍 The test code is configured with a common source set and a native source set. The simplest way to get native tests running on the command line is to build a macos target; the build process automatically builds and runs a command line executable. JVM is currently disabled because we’re not talking about the JVM 🙂

📄 The docs are pretty clear that you shouldn’t rely on this in the future, as it may change, but for the foreseeable future, one Worker gets one thread.

🚮 That’s mostly true. There is a garbage collector in the runtime, but I’m pretty sure that’s there to deal with reference cycles. Memory is primarily managed by reference counting.

🍎 There are some important differences to note. KN can deal with reference cycles, so “weak” references aren’t a concern. Also, to be clear, KN objects are ref counted, and it’s conceptually similar to ARC, but it’s a separate system. While running on iOS, KN doesn’t use ARC for its ref counts.

😟 A fair number of people have expressed their hope that JB abandons the “Saner Concurrency” effort. The comment in that coroutines issue implies they might, or at least relax the rules somewhat. While I understand this stuff can be confusing, the ultimate goal is to produce a better platform. I would very much like some improved debug info from immutability related exceptions, and some improved library support, but once you get your head around this stuff it’s not that bad.

🍌 Calling for the future result forces the main thread to wait. That’s why this test works correctly. This can all get very tricky when trying to interact with the main thread, etc. There are frameworks and examples in more mature ecosystems to help out, but KN and multiplatform are in early days. Just an FYI.

😴 I’ve been asked if there’s any reason to learn this crazy threading stuff if the coroutines API will largely hide the details. Although we don’t know yet what changes, if any, will happen to the KN concurrency and state model to accommodate coroutines, unless JetBrains radically changes their plan and abandons everything, you’ll definitely need to understand this stuff.

Should You Develop Your App on iOS or Android First?


Android or iOS. Which platform to build first?

Great, you have an app idea! You are a tech founder or enterprise mobile leader and you really believe in your vision and team, but now, you’re challenged with deciding Android or iOS.

First, there is no universal truth or answer to this question.

With variable amounts of time, money, and manpower, it can be difficult to determine which platform best suits your strategy. Despite the challenge, by carefully weighing a few key considerations, you can figure out the best platform for you.


4 Factors to Consider When Deciding Android or iOS


1. Demographics

Who is your target audience? Who do you see downloading this app? Break this question down into two identifiers: location and socioeconomic status.

Source: DeviceData

Location: If your target audience is based in America, Canada, the United Kingdom or Australia, then iOS makes more sense. However, if your audience is located in developing markets across Asia, South America, or Africa, choose Android. Device Data has plenty of data for more specific country breakdowns.

Socioeconomic status: Similarly, if your audience is based in more developed countries, then their presumed wealth indicates that iOS might be more popular. Contrastingly, in less developed economies, users are less likely to pay for apps and prefer in-app advertisements. For this reason, Android might be more profitable. Slate produced an investigative report supporting the economic divide between Android and iOS. 


2. Deployment 

Which operating system is more compatible with your vision? Break down the pros and cons: operating systems and the App/Play Store.

Operating Systems: If you would prefer more customization and control, then Android’s operating system is the better fit. If customization and control are not major priorities in your decision, then iOS’s more restrictive environment might be best. On the other hand, if you would prefer to launch faster, develop on iOS—launching is faster there because iOS lacks device fragmentation.

App/Play Store: It is objectively easier to get your app approved on the Google Play Store—the approval process is automated and primarily focused on policy violations, making it a much more lenient process than the App Store’s manual review, where approval can be considerably more difficult.


3. Development

App development, of course, requires access to labor, funding and time. How long do you have to develop this app? And what does your funding look like? Your funding will primarily go toward hiring and sustaining the engineering talent required to develop the app.

Access to Labor: If you only have access to iOS engineers (and your audience is on iOS), then you will lean towards developing for iOS first. If you have access to Android engineers (and your audience is on Android), then develop for Android first. If you have access to both and sufficient funding, then build for both!

We pulled this graph from Infinum, which depicts the hours of work for each project. They calculated that Android development consumes 30% more time, thereby making Android more costly.

Funding: Developing a mobile application can have varying costs depending on the type of app and its features. On average, iOS engineers have a higher hourly rate for development.

Deployment Time: How quickly do you need the app deployed? The Google Play Store is more likely to release your app quickly—additionally, Google Play offers beta tracks for test releases. Contrastingly, the App Store carefully reviews all apps, and wait times can be days or weeks long. If you are in a hurry to deploy your app, then Android might be the best option here.

Modern Mobile Development: We believe the future of mobile innovation is multiplatform mobile development. If you’re interested in exploring how Kotlin Multiplatform can help you code once and deploy to Android and iOS, check this out.


4. Revenue Model

Revenue: Your revenue model should reflect your target audience. iOS users are more willing to spend money on their apps, and are more annoyed by in-app advertisements. Contrastingly, Android users are more likely to not spend money on an app and more likely to be okay with in-app advertisements. If you are looking for the most revenue-generating model, iOS definitely wins there. 

We pulled these charts off of App Annie depicting the difference in revenue generation for the App and Play Stores. More Google Play users are downloading apps; contrastingly, more iOS users spend in the App Store than Android users in Google Play. Therefore it makes more sense to use in-app advertisements to generate revenue in Android and charge consumers for in-app purchases in the App Store. 


Examples of Successful Companies Who Chose Android, iOS or Both


Android first: Thrive Global (current Touchlab client) is an app designed to effectively monitor and control mobile use. The app requires a lot of device side controls (not allowed on iOS), which guided the decision to develop on Android first. Check it out here!

iOS first: In the United States, iOS has 65% market share while Android has 35% (according to DeviceData). For this reason, there are generally more iOS engineers available compared to Android engineers, and Apple gets more consumer fandom—therefore, successful startups have generally developed for iOS first, following with Android. For example, App Annie states that Airbnb launched their first app on iOS in November 2010. In January 2012, after amassing a strong base and revenue model, they launched Airbnb on Android. Another successful app, Instacart, also started this way, with iOS in August 2012 and Android in May 2014. Unsurprisingly, Touchlab’s early years were spent porting iOS apps to Android.

Both (ideal!): If talent and funding are not constraints, developing iOS and Android simultaneously is the best option. One example of a successful app that developed on both platforms is Crew. They launched in May 2015 and have raised $24.9 million in VC funding.

Another example of developing for both is DoorDash, which released on Android and iOS around the same time! App Annie confirms that DoorDash launched on iOS in October 2013 and on Android in December 2013. DoorDash is an excellent example of an app that understood its target audiences: it needed to attract not only clients who would use the app for deliveries, but also couriers who would make the deliveries. For these reasons, one can assume a difference in socioeconomic status and therefore expect to find clients on iOS and labor on Android. DoorDash reacted accordingly and pursued iOS and Android development concurrently.

Additionally, sometimes your app does not need to come first because, perhaps, your product is best for the web right now. App Annie shows that Reddit was launched on both iOS and Android on April 7, 2016. Yet, Reddit was founded in June 2005. As a web-based platform, they were able to establish themselves and later develop concurrently for mobile platforms.

As you can see, companies and developers take different approaches to their mobile app development—and no approach is right or wrong. The only way to mitigate risk is to first understand the considerations, factors, and your audience—then, you make a wise and informed decision.


This post was written by Touchlab marketing intern Mina Mahmood. 

Blindsided: The Hidden Cost in Mobile App Development


“I just wanted the world to see I was real with it | Wanted a deal, got it, and couldn’t deal with it” – Joe Budden

Previously, I answered the most popular question ever regarding mobile app development: How Much Does It Cost To Build A Mobile App? You’ve since read that post, done your homework, found a great service provider (still looking? hit me up), answered all the questions about pre-engagement cost levers, and have a price that’s on budget.

You are good to go, right?


“But if the devil’s in the details, then I’m Satanic” – Drake

Don’t be blindsided by a good deal. (All respect due to Stevie Wonder … he’s a national if not global treasure!)

We only discussed the pre-engagement factors that affect the cost of developing a mobile application. However, there’s a major, hidden, shady A.F. way that costs can increase during the engagement, and it is all in the contractual statement of work.


Typically, engagements will either be of one of two contractual formats:

  • Fixed bid: the product requirements, deliverables, necessary work effort and resultant price are set in the statement of work contract – ostensibly a low-to-no risk engagement
  • Time & materials: given a degree of uncertainty, client pays the contractor for the work effort exhausted toward achieving the statement of work, understanding that the desired deliverable may or may not be achieved within the contracted work effort

One can debate the merits of either format from a budgetary or resource planning perspective; however, in our experience, the debate is immaterial due to an unethical trick in practice.


Clients inexperienced with contractors may opt for fixed bid given the attractive benefit of a known price – it is an easier internal sale. Some unscrupulous contractors leverage this to win the engagement via a low fixed bid, while also withholding some of the unknowns necessary to complete the deliverables that have not been detailed in the contract.

What then happens is a slippery slope. The engagement kicks off, everything is great, work is getting done—but then the unknowns are revealed, and a work order or change order is presented because the work deviates from the contract.

Uh oh… your fixed bid price is no longer fixed!

At this point, the client is stuck with the contractor, so there is no competitive bid on the work/change order. Whenever the scope of work creeps, so does the price. Now you have to go back and ask for more budget after you promised fixed costs. Ouch.

“But what I’m doing is not a good look | I never did it by the good book as a lifetime crook” – The Roots

In the world of mobile apps, where the hardware and software technology is constantly being updated, there are a host of unknowns that make fixed bid historically difficult if not impossible to navigate. Repeat after me:

Fixed bid mobile development is an aggressive business tactic that never ends well.


In a time & materials contract, where there is clear definition of price-for-level-of-effort rates, the client understands implicitly and explicitly how scope creep affects the pricing. We find that this clarity aligns both the client and contractor:

  1. We both know directionally where we want to end up.
  2. We both understand that there might be some known unknowns and unknown unknowns along the way.
  3. We agree on how the pricing structure will work in advance of those eventualities.

Though it takes more time to explain the value of time & materials, there’s a beauty in being transparent, resulting in better mutual guidance, fewer legal rotations, happier clients, and repeat business or referrals. 👍🏽


I pride myself on running an ethical mobile advisory and development business, where we reveal every risk, every cost, and every solution path before signature.

If you’re looking for a trusted, ethical mobile services partner, please consider Touchlab. We’re ready to earn your business. My name is Jeff. I help build businesses.