Kotlin Xcode Plugin


For native mobile developers using Kotlin Multiplatform, the iOS dev experience will be critical. Over the next several months at least, that will be the primary focus of Touchlab’s Kotlin R&D and our collaboration with Square.

Our first release is an Xcode Plugin for Kotlin. Once installed, Xcode will recognize Kotlin files as source files, provide keyword coloring, and most importantly, will let you interactively debug Kotlin code running in the iOS simulator.


Depending on project setup, that includes regular code as well as Kotlin unit test code. We’ll be providing info on how to run your unit tests in Xcode soon.

Download Plugin Archive Here


Live Code!

Setting up the plugin and Xcode can be a little tricky. We’ll be doing a live code session Friday 4/26 at 3pm EST: setting up a project, demoing debug functionality, helping out some others with setup, and, time permitting, checking out AppCode as well. If you’d like to watch, sign up here.

If you’re planning on using the Xcode debugger and would like some live help, tick the box on the form above. You’ll need to share your screen, or at least have an open source project we can run. We’ll be picking 1 or 2 projects and attempting to configure them live.


AppCode?

We are still very much fans of JetBrains tools and look forward to a fully integrated dev experience with AppCode. Check out the blog post about updated support in v2019.1. AppCode has started to ship with an interactive debugger, and we look forward to this product maturing along with Kotlin as a platform.

However, convincing iOS devs to give Kotlin MP a shot is easier with an Xcode option. This plugin is definitely not a full featured tool like IntelliJ IDEA and AppCode, but will allow iOS developers to step through and debug shared code.

Our thinking is that on many teams, iOS focused developers will start out mostly consuming Kotlin. Being able to navigate and debug in their native tools will go a long way towards interest and learning.


Xcode Plugin?

A few years back Apple shut down most plugin functionality for Xcode. However, that mostly applies to executable code that may alter output. Although the Kotlin Plugin goes in the “Plugin” folder, it does nothing prohibited by Apple, so requires no special permissions. You can simply copy the language config files and restart Xcode.

Xcode will ask if you’d like to load the bundle on first run. You’ll need to allow that by clicking “Load Bundle”.


Status

This is definitely a work in progress. There are multiple facets that will be improved upon in the future. However, once configured, this is a very useful tool. Feedback and contributions are very much welcome.


Kotlin Source

In order to set breakpoints in Kotlin code, you’ll need the Kotlin source available in Xcode. For those familiar with IntelliJ and Android Studio, the source won’t simply “appear” because it’s in your project folder. We need to import it into Xcode.

You can import manually, but you should make sure the files aren’t copied out to the app bundle. Xcode “recognizes” Kotlin files, but doesn’t, by default, treat them the same as Swift files, for example. You will also need to refresh the Kotlin files periodically as you add/remove code.

As an alternative, we have a Gradle plugin that you can point at a source folder and an Xcode project, and it’ll import the Kotlin files. This plugin is very new and some planned features are still missing; Kotlin source for dependencies would be the most obvious. We’d love some help with this one.
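As a rough sketch only — the plugin id, extension name, and property names below are invented for illustration, not the plugin’s documented API; check the plugin’s README for the real configuration:

```kotlin
// build.gradle.kts — hypothetical configuration. "co.touchlab.xcodesync",
// the xcode { } extension, and its properties are assumed names.
plugins {
    id("co.touchlab.xcodesync")
}

xcode {
    // Xcode project that should receive the Kotlin source references
    projectPath = "ios/MyApp.xcodeproj"
    // Kotlin sources to mirror into the project
    kotlinSourceDir = "src/commonMain/kotlin"
}
```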

As another alternative, if using CocoaPods, it’s possible to add source_files to your podspec, but we’re just starting to experiment with this.


Next Steps

We’ll be posting more general updates soon. Subscribe to the mailing list, follow me and/or Touchlab on Twitter, or come watch the live stream mentioned above.

Also come hear Justin at The Lead Dev NYC!

Touchlab & Square Collaborating on Kotlin Multiplatform


My professional career started in college with Java 1.0. I started working during the bright Write-Once-Run-Anywhere heyday. We were finally going to get one platform!

Well, we know how that went. Java failed as a consumer UI platform. However, as a vehicle of portable logic, Java has been one of the biggest success stories in computers.

Thus one of my favorite quotes:

Shared UI is a history of pain and failure. Shared logic is the history of computers.

-Kevin Galligan (me)

I have been fixated on this for the last few years, because the need is obvious, as is the opportunity. Native mobile platforms, Android and iOS, are almost identical under the UI. They are architecturally homogeneous. The web, although somewhat different, still needs mobile-friendly logic, and as WebAssembly matures, will look increasingly like native mobile at an architectural level.

Kotlin Multiplatform is a great entry into the pool of options for shared logic. It natively interops with the host platform, allowing optional code sharing. The tools are being built by JetBrains, so as the ecosystem matures, we can expect a great developer experience. Kotlin is also very popular with developers, so over the long term, Kotlin Multiplatform adoption is pretty much assured.

I believe in its future enough to move my role at Touchlab to be almost entirely Kotlin Multiplatform R&D. That means I code and speak about open source and Kotlin Multiplatform. As a business, we’ve pivoted Touchlab to be the Kotlin MP shop. We are of course still native mobile product experts, but we are also looking forward to helping clients leverage a shared, mobile-oriented architecture as the future of product development.

Square’s Jesse Wilson recently announced his team’s commitment to Kotlin Multiplatform. We are super excited to get to work with them on improving the developer experience and catalyzing the KMP ecosystem. It would not be an exaggeration to say that this team, and Square more broadly, is responsible for much of what the Android community considers best practice, if not the actual libraries themselves.

To be successful for native mobile, Kotlin Native needs to be something that Swift developers feel is productive and at least somewhat native to their modern language and environment. I think also selling iOS developers on Kotlin as a language and platform is important. This will largely be our focus for the near future.

Will it work out? We’ll see. In as much as it’s possible to make Kotlin native to iOS, I think we have one of the best possible teams to help us find out. I am very much looking forward to the challenge.


Learning More about Kotlin Multiplatform 

If you want to learn more about Kotlin Multiplatform here are a couple resources:

– Webinar for evaluating multiplatform development frameworks

– Sign up for our Kotlin Multiplatform newsletter

– Register for a future webinar on Kotlin Multiplatform for iOS Developers

Kotlin Native Stranger Threads Ep 2


Episode 2 — Two Rules

This is part 2 of my Kotlin Native threading series. In part 1 we got some test code running and introduced some of the basic concurrency concepts. In part 2 we’re going a bit deeper on those concepts and peeking a bit under the hood.

Some reminders:

  1. KN means Kotlin Native.
  2. Emoji usually means a footnote 🐾.
  3. Sample code found here:

Look in the nativeTest folder: src/nativeTest/kotlin/sample

Two Rules

K/N defines two basic rules about threads and state:

  1. Live state belongs to one thread
  2. Frozen state can be shared

These rules exist to make concurrency simpler and safer.

Predictable Mutability

Rule #1 is easy to understand. If state is only ever accessible from a single thread, you simply can’t have concurrency issues. That’s why JavaScript is single threaded 😤.

If you already understand thread confinement, know that the KN runtime enforces an aggressive form of it on all mutable state; feel free to skip to “Seriously Immutable”.

As an example that you’re likely familiar with, think of the android and iOS UI systems. Have you ever thought about why there’s a main thread? Imagine you’re implementing that system. How would you architect concurrency?

The UI system is either currently rendering, or waiting for the next render. While rendering, you wouldn’t want anybody modifying the UI state. While waiting, you wouldn’t care.

You could enforce some complex synchronization rules, such that all components’ data locks during rendering, and other threads could call into synchronized methods to update data, and wait while rendering is happening. I won’t even get into how complex that would probably be. The UI system really gets nothing out of it, and synchronizing data has significant costs related to memory consistency and compiler optimization.

Rather than letting other threads modify data, the UI system schedules tasks on a single thread. While the UI system is rendering, it is occupying that thread, effectively “locking” that state. When the UI is done, whatever you’ve scheduled to run can modify that state.

To enforce this scheme, UI components check the calling thread when a method is called, and throw an exception if it’s the wrong thread. While manual, it’s a cheap and simple way to make things “thread safe”.
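As a sketch of that manual check (JVM-style Kotlin; the class and property are made up for illustration, not any real toolkit’s API):

```kotlin
// Minimal thread-confinement sketch: remember the owning thread and
// fail fast if mutable state is touched from anywhere else.
class UiComponent {
    private val mainThread: Thread = Thread.currentThread()

    private fun checkThread() {
        check(Thread.currentThread() === mainThread) {
            "UI state accessed off the main thread"
        }
    }

    var text: String = ""
        set(value) {
            checkThread() // throw now rather than corrupt state later
            field = value
        }
}
```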

That’s effectively what’s happening here. Mutable state is restricted to one thread. Some differences are:

  1. In the case of UI, you’ve got a “main” thread and “everything else”. In KN, every thread is its own context. Mutable state exists wherever it is created or moved to.
  2. UI (and other system) components need to explicitly guard for proper thread access. KN bakes this into the compiler and runtime. It’s technically possible to work around this, but the runtime is pretty good at preventing it 🧨.

Maintaining coherent multithreaded code is one of those rabbit holes that keeps delivering fresh horrors the deeper you go. Makes for good reading, though.

Seriously Immutable

Rule #2 is also pretty easy to understand. More generally stated, the rule is immutable state can be shared. If something can’t be changed, there are no concurrency issues, even if multiple threads are looking at it.

That’s great in principle, but how do you verify that a piece of state is immutable? It would be possible to check at runtime that some state is comprised entirely of vals or whatever, but that introduces a number of practical issues. Instead, KN introduces the concept of frozen state: a runtime designation that enforces immutability and quickly lets the KN runtime know state can be shared.

As far as the KN runtime is concerned, all non-frozen state is possibly mutable, and restricted to one thread.

Freeze

Freeze is a process specific to KN. There’s a function defined on all objects.

public fun <T> T.freeze(): T

To freeze some state, call that method. The runtime then recursively freezes everything that state touches.

someState.freeze()

Once frozen, the object’s runtime metadata is modified to reflect its new status. When handing state to another thread, the runtime checks that it’s safe to do so. That (basically) means either there are no external references to it and ownership can be given to another thread, or that it’s frozen and can be shared 🤔.

Freezing is a one way process. You can’t un-freeze.
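A quick sketch of that one-way transition, using the same isFrozen check the test samples use:

```kotlin
import kotlin.native.concurrent.freeze
import kotlin.native.concurrent.isFrozen

data class SomeData(val s: String)

fun freezeDemo() {
    val data = SomeData("hello")
    println(data.isFrozen) // false: freshly created state is thread-local
    data.freeze()
    println(data.isFrozen) // true, and it stays that way
}
```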

Everything Is Frozen

The freeze process will capture everything in the object graph that’s being referenced by the target object, and recursively apply freeze to everything referenced.

data class MoreState(val someState: SomeState, val a:String)

@Test
fun recursiveFreeze(){
  val moreState = MoreState(SomeState("child"), "me")
  moreState.freeze()
  assertTrue(moreState.someState.isFrozen)
}

In general, freezing state on data objects is pretty simple. Where this can get tricky is with things like lambdas. We’ll talk more about this when we discuss concurrency in the context of application development, but here’s a quick example I give in talks.

val worker = Worker.start()

@Test
fun lambdaFail(){
  var count = 0

  val job: () -> Int = {
    for (i in 0 until 10) {
      count++
    }
    count
  }

  val future = worker.execute(TransferMode.SAFE, { job.freeze() }){
    it()
  }

  assertNull(future.result)
  assertEquals(0, count)
  assertTrue(count.isFrozen)
}

There’s a bit to unpack in the code above. The KN compiler tries to prevent you from getting into trouble, so you have to work pretty hard to get the count var into the lambda and throw it over the wall to execute.

Any function you pass to another thread, just like any other state, needs to follow the two rules. In this case we’re freezing the lambda job before handing it to the worker. The lambda job captures count. Freezing job freezes count ⚛️.

The lambda actually throws an exception, but exceptions aren’t automatically bubbled up. You need to check Future for that, which will become easier soon.

Freeze Fail

Freezing is a process. It can fail. If you want to make sure a particular object is never frozen recursively, you can call ensureNeverFrozen on it. This will tell the runtime to throw an exception if anybody tries to freeze it.

@Test
fun ensureNeverFrozen()
{
  val noFreeze = SomeData("Warm")
  noFreeze.ensureNeverFrozen()
  assertFails {
    noFreeze.freeze()
  }
}

Remember that freezing acts recursively, so if something is getting frozen unintentionally, ensureNeverFrozen can help debug.

Global State

Kotlin lets you define global state. You can have global vars and objects. KN has some special state rules for global state.

var and val defined globally are only available in the main thread, and are mutable 🤯. If you try to access them on another thread, you’ll get an exception.

val globalState = SomeData("Hi")

data class SomeData(val s:String)
//In test class...
@Test
fun globalVal(){
  assertFalse(globalState.isFrozen)
  worker.execute(TransferMode.SAFE,{}){
    assertFails {
      println(globalState.s)
    }
  }.result
}

object definitions are frozen on init by default. It’s a convenient place to put global service objects.

object GlobalObject{
  val someData = SomeData("arst")
}
//In test class...
@Test
fun globalObject(){
  assertTrue(GlobalObject.isFrozen)
  val wval = worker.execute(TransferMode.SAFE, {}){
    GlobalObject
  }.result
  assertSame(GlobalObject, wval)
}

You can override these defaults with the annotations ThreadLocal and SharedImmutable. ThreadLocal means every thread gets its own copy. SharedImmutable means that state is frozen and shared between threads (the default for global objects).

@ThreadLocal
val thGlobalState = SomeData("Hi")
@Test
fun thGlobalVal(){
  assertFalse(thGlobalState.isFrozen)
  val wval = worker.execute(TransferMode.SAFE,{}){
    thGlobalState.freeze()
  }.result

  assertNotSame(thGlobalState, wval)
}
@SharedImmutable
val sharedGlobalState = SomeData("Hi")
@Test
fun sharedGlobalVal(){
  assertTrue(sharedGlobalState.isFrozen)
  val wval = worker.execute(TransferMode.SAFE,{}){
    sharedGlobalState.freeze()
  }.result

  assertSame(sharedGlobalState, wval)
}

Atomics

Frozen state is mostly immutable. The K/N runtime defines a set of atomic classes that let you modify their contents, yet to the runtime they’re still frozen. There’s AtomicInt and AtomicLong, which let you do simple math, and the more interesting class, AtomicReference.

AtomicReference holds an object reference that must itself be frozen, but which you can swap out. This isn’t super useful for regular data objects, but can be very handy for things like global service objects (database connections, etc).

It’ll also be useful in the early days as we rethink best practices and architectural patterns. KN’s state model is significantly different compared to the JVM and Swift/ObjC. Best practices won’t emerge overnight. It will take some time to try things, write blog posts, copy ideas from other ecosystems, etc.
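For example, here’s a sketch of a swappable global service object; DbDriver and ServiceRegistry are made-up names, the pattern is the point:

```kotlin
import kotlin.native.concurrent.AtomicReference
import kotlin.native.concurrent.freeze

// Hypothetical service interface; any frozen implementation will do.
interface DbDriver { fun query(sql: String): String }

// object definitions are frozen on init, but the AtomicReference
// inside remains swappable.
object ServiceRegistry {
    private val dbRef = AtomicReference<DbDriver?>(null)

    fun installDb(driver: DbDriver) {
        // Values stored in an AtomicReference must themselves be frozen
        dbRef.value = driver.freeze()
    }

    val db: DbDriver
        get() = dbRef.value ?: error("DbDriver not installed")
}
```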

Here’s some basic sample code:

data class SomeData(val s:String)
@Test
fun atomicReference(){
  val someData = SomeData("Hello").freeze()
  val ref = AtomicReference(someData)
  assertEquals("Hello", ref.value.s)
}

Once created, you reference the data in the AtomicReference by the ‘value’ property.

Note: The value you’re passing into the AtomicReference must itself be frozen.

@Test
fun notFrozen(){
  val someData = SomeData("Hello")
  assertFails {
    AtomicReference(someData)
  }
}

So far that’s not doing a whole lot. Here’s where it gets interesting.

class Wrapper(someData: SomeData){
  val reference = AtomicReference(someData)
}
@Test
fun swapReference(){
  val initVal = SomeData("First").freeze()
  val wrapper = Wrapper(initVal).freeze()
  assertTrue(wrapper.isFrozen)
  assertEquals(wrapper.reference.value.s, "First")
  wrapper.reference.value = SomeData("Second").freeze()
  assertEquals(wrapper.reference.value.s, "Second")
}

The Wrapper instance is initialized and frozen, then we change the value it references. Again, it’s very important to remember the values going into the AtomicReference need to be themselves frozen, but this allows us to change shared state.

Should You?

The concepts introduced in KN around concurrency are there for safety reasons. On some level, using atomics to share mutable state is working around those rules. I was tempted to abuse them a bit early on, and they can be very useful in certain situations, but try not to go crazy.

AtomicReference is a synchronized structure. Compared to “normal” state, its access and modification performance will be slower. That is something to keep in mind when making architecture decisions.

A small side note. According to the docs you should clear out AtomicReference instances because they can leak memory. They won’t always leak. As far as I can tell, leaks may happen if the value you’re storing has cyclical references. Otherwise, data should clear fine when you’re done with it. Also, in general you’re storing global long-lived data in AtomicReference, so it’s often hanging out “forever” anyway.

Transferring Data

According to rule #1, mutable state can only exist in one thread, but it can be detached and given to another thread. This allows mutable state to remain mutable.

Detaching state actually removes it from the memory management system, and while detached, you can think of it as being in its own little limbo world. If for some reason you lose the detached reference before it’s reattached, it’ll just hang out there taking up space 🛰.

Syntactically, detaching is very similar to what we experienced with the Worker producer. That makes sense, because Worker.execute is detaching whatever is returned from producer (assuming it’s not frozen). You need to make sure there are no external references to the data you’re trying to detach. The same syntactically complex situations apply.

You detach state by creating an instance of DetachedObjectGraph, which takes a lambda argument: the producer.

data class SomeData(val s:String)
@Test
fun detach(){
  val ptr = DetachedObjectGraph {SomeData("Hi")}.asCPointer()
  assertEquals("Hi", DetachedObjectGraph<SomeData>(ptr).attach().s)
}

Like Worker’s producer, non-zero reference counts will fail the detach process.

@Test
fun detachFails(){
  val data = SomeData("Nope")
  assertFails { DetachedObjectGraph {data} }
}

The detach process visits each object in the target’s graph when performing the operation. That is something that can be a performance consideration in some cases 🐢.

I personally don’t use detach often directly, although I’ll go over some techniques we use in app architectures in a future post.

TransferMode

We mentioned TransferMode a lot back in the last post. Now that we have some more background, we can dig into it a bit more.

Both Worker.execute and DetachedObjectGraph take a TransferMode parameter. DetachedObjectGraph simply defaults to SAFE, while Worker.execute requires it explicitly. I’m not sure why Worker.execute doesn’t also default. If you want a smart sounding question to ask at KotlinConf next year, there you go.

What do they do? Essentially, UNSAFE lets you bypass the safety checks on state. I’m not 100% sure why you’d want to do that, but you can.

It’s possible that you’re very sure your data would be safe to share across threads, but the purpose of the KN threading rules is to enable the runtime to verify that safety. Simply passing data with UNSAFE would defeat that purpose, but there’s a far more important reason not to do that.

We mentioned back in post #1 that memory is reference counted. There’s actually a field in the object header that keeps count of how many references exist to an object, and when that number reaches zero, the memory is reclaimed.

That ref count itself is a piece of state. It is subject to the same concurrency rules that any other state is. The runtime assumes that non-frozen state is local to the current thread and, as a result, reference counts don’t need to be atomic. As reference counting happens often and is not avoidable, being able to do local math vs atomic should presumably have a significant performance difference 🚀.

Even if the state you’re sharing is immutable but not frozen, if two threads can see the same non-frozen state you can wind up with memory management race conditions. That means leaks (bad) or trying to reference objects that have been freed (way, way worse). If you want to see that in action, uncomment the following block at the bottom of WorkerTest.kt:

@Test
fun unsafe(){
  for (i in 0 until 1000){
    unsafeLoop()
    println("loop run $i")
  }
}

private fun unsafeLoop() {
  val args = Array(1000) { i ->
    JobArg("arg $i")
  }

  val f = worker.execute(TransferMode.UNSAFE, { args }) {
    it.forEach {
      it
    }
  }

  args.forEach {
    it
  }

  f.result
}

I won’t claim to know everything in life, but I do know you don’t want memory management race conditions.

Also, as hinted with 🤔, the shared type throws a new special case wrinkle into all of this, but that’s out of scope for today, and only used with one class for now.

Maybe This Is All Going Away

Now that we’ve covered what these concurrency rules are, I’d like to address a more existential topic.

I started really digging into KN about a year ago, and I wasn’t a fan of the threading model at first. There’s a lot to learn here, and the more difficult this is to pick up, the less likely we’ll see adoption, right?!

I think having Native and JVM with different models can be confusing, although you do get used to it. Better libraries, better debug output, being able to debug in the IDE at all: these things will help.

There is some indication that these rules may change, but little clarification as to what that would mean. Reading through those comments, and from my community conversations, there is definitely some desire for the Kotlin team to “give up” and have “normal” threading. However, that’s not a universal opinion, it’s pretty unlikely to happen (100%, anyway), and I think it would be bad if it did. If anything, I’d rather see the ability to run the JVM in some sort of compatibility mode, which applies the same rules.

What I think we could all agree on is the uncertainty won’t help adoption. If there’s likely to be significant change to how Native works in the near term, going through the effort of learning this stuff isn’t worth it. I’m really hoping for some clarification soon.

Up Next

We’ve covered the basics of state and threading in KN. That’s important to understand, but presumably you’re looking to share logic using multiplatform. JVM and JS have different state rules. Future posts will discuss how we create shared code while respecting these different systems, and how we implement concurrency in a shared context. TL;DR we’re making apps!


🐾 I’m prone to tangents. Footnotes let you decide to go off with me.

😤 Yes, workers. All the state in a worker is still local to a single thread, and the vast majority of JS simply lives in the single-threaded world.

🧨 In more recent versions of KN, the runtime seems to catch instances of code accessing state on the wrong thread due to native OS interop code, but I’m not sure how comprehensive that is. In earlier versions it didn’t really check much, and it was pretty easy to blow up your code. If you want to experience that for yourself, try passing some non-frozen state around with TransferMode.UNSAFE.

🤔 There’s a recent addition to the runtime that defines another state called shared. It’s effectively allowing shared non-frozen state. You can’t call it and create shared objects 🔓. It’s an internal method. Currently you can create a shared byte array using MutableData, and that’s about it. I’d be surprised if that’s the only thing “share” is used for. We’ll see.

🔓 That’s not 100% true. I’m pretty sure you could call it, but I’m not saying how 🙂

⚛️ You can make this example work with an atomic val, although I’m guessing you wouldn’t actually write it this way. But…

@Test
fun atomicCount(){
  val count = AtomicInt(0)

  val job: () -> Int = {
    for (i in 0 until 10) {
      count.increment()
    }
    count.value
  }

  val future = worker.execute(TransferMode.SAFE, { job.freeze() }){
    it()
  }

  assertEquals(10, future.result)
  assertEquals(10, count.value)
  assertTrue(count.isFrozen)
}

🤯 This wasn’t true early on. Each worker would get its own copy on init. That’s a problem if you do something like define a global worker val (which gets created on init, and defines a worker val, which gets created on init…).

🛰 I’ve made a horrible Star Trek analogy. It’s like beaming. It can live on one side or the other, but can do nothing in transit. If it results in the same entity in two places you have a big problem, and it can be lost in transit forever.

🚀 I’m definitely not suggesting you do any premature optimization, but if you were constructing some performance critical code and had a big pile of state hanging around, it’s maybe something to consider. Non-frozen state wouldn’t have the same accounting cost. Now forget you read this.

Kotlin Native Stranger Threads Ep 1


Episode 1 — Worker

My original post about Kotlin Native (KN) concurrency was written a while ago, with a much earlier version of Native and Multiplatform. Now that Kotlin Multiplatform is ready for production development, it’s time to revisit how Native concurrency works and how to use it in your application development.

Concurrency and state in KN is significantly different compared to what you’re likely used to. Languages like Java, Swift, Objective-C, and C++ give the developer tools to ensure proper concurrent state access, but using them properly is up to the developer. Writing concurrent code in these languages can be difficult and error prone. KN, by contrast, introduces constraints that allow the runtime to verify that concurrent access is safe, while also providing reasonable flexibility. It is trying to find a balance between safety and access. What that means is changing, and even within JetBrains there appear to be conflicting visions. What is clear, however, is that JetBrains is committed to Saner Concurrency, and to building a platform for the future.

In this series we’ll cover the rules and structures of KN’s concurrency and state model, and how they apply in the context of application development.

Just FYI, if you see emoji in the doc, that’s generally a footnote with unnecessary info 😛.

Episode 1 — Workers ⏮

Kotlin Native (KN) concurrency is kind of a big topic. For developers familiar with Java and Swift/ObjC concurrency, there are several new concepts to learn, which presents a problem out of the gate. Where to start?

In general, I like to be able to play with the code right away, so we’ll start with a core KN concurrency mechanism: Worker. We’ll encounter some concepts before we’ve had a chance to explain them, but we’ll sort that out later on in the series.

The code samples in this post can be found here. You’ll need a macOS machine to run them. Adding other platforms should be pretty simple, if anybody wants to give it a shot.

Most of the examples are implemented as unit tests 🔍. You can run them by typing:

./gradlew build

Worker

KN supports concurrency out of the box using a structure called “Worker”. A Worker is a job queue on which you can schedule jobs to run on a different thread.

The Worker related tests can be found here.

Creating a worker is relatively straightforward.

import kotlin.native.concurrent.Worker
class TestWorker {    
  val worker = Worker.start()
}

Each Worker instance gets a thread 📄. You can schedule jobs to be run on the worker’s thread.

worker.execute(TransferMode.SAFE, {"Hello"}) {
    //Do something on Worker thread
}

There are a few things to take note of in that call. Here’s the function definition for execute:

fun <T1, T2> execute(
        mode: TransferMode,
        producer: () -> T1,
        job: (T1) -> T2): Future<T2>

We’ll discuss TransferMode in part 2. In summary, there are two options: SAFE and UNSAFE. Just assume it’s always TransferMode.SAFE.

The producer parameter is a lambda that returns the input to the background job (generic type T1). That’s how you pass data to your background task.

It’s critically important to understand that whatever gets returned from the producer lambda is intended to be passed to another thread, and as a result, must follow KN state and concurrency rules. That means it either needs to be frozen, or needs to be fully detachable. In theory, being detachable is simple, but in practice it can be tricky. We’ll talk about that in a bit.

The job parameter is the work you intend to do on the background thread. It will take the result of the producer (T1) as a parameter and return a result (T2) that will be available from the Future.

We’ll discuss this more later on, but it’s a super important topic and bears some repetition. It is very easy to accidentally capture outside state in the job lambda. This is not allowed and the compiler will complain. You’ll need to be extra careful to avoid doing that.

Execute’s return is Future<T2>. Your calling thread can block and wait for this value, but in an interactive application we’ll need a way back to the calling context that doesn’t block the UI.
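Putting those pieces together, a minimal (blocking) usage sketch; in a real app you’d hand the result back to the main thread instead of waiting on it:

```kotlin
import kotlin.native.concurrent.TransferMode
import kotlin.native.concurrent.Worker
import kotlin.native.concurrent.freeze

// Schedule a job on the worker and block on the Future for its result.
fun lengthInBackground(worker: Worker, s: String): Int {
    val future = worker.execute(TransferMode.SAFE, { s.freeze() }) { arg ->
        arg.length // runs on the worker's thread
    }
    return future.result // blocks the calling thread until the job finishes
}
```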

producer

The producer’s job is very simple: isolate a parameter value to hand off to the background job. You’ll see the producer lambda both here and when we need to detach an object from the object graph. It’s a little confusing at first, but understanding what’s happening with the producer will help clear up KN’s broader concurrency concepts.

Take note of the fact that the producer is a lambda and not just a value. It doesn’t look like this.

worker.execute(TransferMode.SAFE, "Hello"){
    //Do something
}

That is (presumably) to make isolating and detaching the object reference easier.

The producer is run in whatever thread you’re calling it from. The result of that lambda is then checked to make sure it can be safely given to the worker’s thread. However, to be clear, all of that activity happens in your current thread. We only engage the worker’s thread when we get to the background job.

Haven’t left the calling thread yet

How do we determine that some state can be safely given to another thread? We have to respect KN’s two basic rules:

  1. Live state belongs to one thread
  2. Frozen state can be shared

Part two is all about the two rules, but in summary:

  1. Live state is the state you’re used to writing
  2. Frozen is, basically, super-immutable. You create frozen state by calling ‘freeze’ on it

Note: We’ll start using data classes rather than String. Strings, as well as other basic value types, are frozen automatically by the runtime.

Here’s a basic example:

data class JobArg(val a: String)
@Test
fun simpleProducer() {
  worker.execute(TransferMode.SAFE, { JobArg("Hi") }) {
    println(it)
  }
}

We create an instance of JobArg inside the producer. There are no external references (nobody has a reference to that instance of JobArg), so the runtime can safely detach and pass the state to the job lambda to be run in another thread.

This, by contrast, fails.

@Test
fun frameReferenceFails() {
  val valArg = JobArg("Hi")
  assertFails {
    worker.execute(TransferMode.SAFE, { valArg }) {
      println(it)
    }
  }
}

When we call execute, valArg is being referenced locally, so the attempt to detach will fail.

This looks like a way to hide the reference, but also fails:

class ArgHolder(var arg: JobArg?) {
  fun getAndClear(): JobArg {
    val temp = arg!!
    arg = null
    return temp
  }
}
@Test
fun stillVisible() {
  val holder = ArgHolder(JobArg("Hi"))
  assertFails {
    worker.execute(TransferMode.SAFE, { holder.getAndClear() }) {
      println(it)
    }
  }
}

Why? Well, this gets a bit into the weeds of how KN’s memory model works. Native doesn’t use a garbage collector 🚮. It uses reference counting. Each allocated object has a count of how many other entities have a reference to it. When that count goes to zero, that memory is freed.

iOS developers will have an easier time with this concept, as this is how Swift and ObjC work 🍎.

References to objects obviously include hard field references, but they also include local frame references. That’s what’s wrong with the block above: the JobArg appears, however briefly, in the local frame context, which still holds a reference to it when the producer attempts to detach it.

Outside context has a local reference

This, however, will work:

fun makeInstance() = ArgHolder(JobArg("Hi"))
@Test
fun canDetach() {
  val holder = makeInstance()
  worker.execute(TransferMode.SAFE, { holder.getAndClear() }) {
    println(it)
  }
}

Here the local frame reference to the JobArg only ever existed inside ‘makeInstance’, and that frame is gone before getAndClear() runs, so the holder’s field is the last remaining reference. So again, if you’re wondering why the producer is a lambda, it’s to make avoiding local references easier. Look at simpleProducer again:

@Test
fun simpleProducer() {
  worker.execute(TransferMode.SAFE, { JobArg("Hi") }) {
    println(it)
  }
}

Much simpler.

Confused?

Passing live data is difficult syntactically. In fact, we don’t have multithreaded coroutines yet because JB still needs to reconcile the two systems 😟. I gave you some pretty weird examples out of the gate on purpose. KN makes passing mutable state between threads difficult, and in general that’s a good thing, because it’s risky. When I need to pass something into a worker I’ll almost always freeze it.

@Test
fun frozenFtw() {
  val valArg = JobArg("Hi").freeze()
  worker.execute(TransferMode.SAFE, { valArg }) {
    println(it)
  }
}

Because frozen data can be shared between threads, the producer can return valArg. This is obviously a simple example, but as you get into Native development, you’ll generally find freezing data to be simpler, and in general, data that you’re passing around should be immutable anyway.

I should mention that you can bypass all of this and simply pass data unsafely with TransferMode.UNSAFE, and it’ll probably work most of the time. Don’t do it, though. It’s called UNSAFE for a reason, so if you can’t clearly explain why you would use it, you never should. We’ll discuss this in detail in part 2.

We spent a lot of time on the producer, but again, it introduces a lot of core, and potentially confusing, topics. If you can fully grasp what’s going on there, you’ll have covered a lot of ground.

Background Job

What happens in the background lambda is much simpler than what was happening with the producer. The lambda takes a single parameter: the result of the producer (which, btw, can be empty). If the background job returns a value, it’ll be available from the Future.

@Test
fun backgroundStuff() {
  val future = worker.execute(TransferMode.SAFE, { 1_000_000 }) {
    var count = 0
    for(i in 1..it){
      //Do some long stuff
      count++
    }
    count
  }
  assertEquals(1_000_000, future.result)
}

Here we’re going to loop and count. We pass the number of loops in the producer.

Just FYI, be careful with threads and unit tests 🍌. The ‘future.result’ forces the thread to wait for the background lambda to finish.

Until now, everything happened in the original calling context. The background job finally gets us into the second thread.

Since the job runs in a different thread, you can’t reference just any state: only the lambda parameter of type T1, originally from our friend the producer, and global state known to be frozen or thread local. In other words, only state that the KN runtime can verify is safe to access.
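
For global state, Kotlin/Native gives you two annotations that match those rules. A sketch (the property names are just illustrations):

```kotlin
import kotlin.native.concurrent.SharedImmutable
import kotlin.native.concurrent.ThreadLocal

// Frozen at program start; any thread, including a worker job, may read it.
@SharedImmutable
val appConfig = mapOf("apiUrl" to "https://example.com")

// Each thread gets its own copy; a worker job sees its own `hits`,
// not the main thread's.
@ThreadLocal
var hits = 0
```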

As mentioned previously, it’s pretty easy to capture other state in the lambda of your background task. The compiler attempts to prevent this, but only when you’re calling the worker method directly. We’ll dive deeper into that when we talk about actually implementing concurrency in your applications.

In simple examples, capturing extra state won’t be much of a problem. Where this quickly becomes problematic is capturing state when you call background tasks from your application objects. I found this difficult at first, but you get used to it. Frameworks help, and especially when multithreaded coroutines become available, running tasks in the background will be simpler 😴.

Future

The ‘execute’ method returns a Future instance, which can be used to check the status of the background process, as well as get the value returned. The value can be Unit, which means you’ll simply verify that the process completed.

If it’s OK to block the calling thread, the simplest way to get your result is to call the result property on the Future instance. That’s what we’re doing in the test examples.

Alternatively you can poll status on the Future, or set up a result Worker to call back to. However, if you’re intending to use Worker in the context of a mobile application, going “back to the main thread” is somewhat more complex. We’ll discuss that later.
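
Polling looks something like this sketch (a busy-wait, purely for illustration):

```kotlin
import kotlin.native.concurrent.Future
import kotlin.native.concurrent.FutureState

// Spin until the job leaves the SCHEDULED state, then read the value.
// A real app would schedule a check instead of spinning, and should
// handle the THROWN/CANCELLED states as errors rather than reading result.
fun <T> awaitByPolling(future: Future<T>): T {
    while (future.state == FutureState.SCHEDULED) {
        // still running on the worker's thread
    }
    return future.result
}
```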

Lifecycle

We don’t worry about it too much in the context of our test samples, but you should shut down Workers when you’re done with them. This is only necessary if you’re going to keep the process running but abandon the Worker. If your Worker instances are meant to live along with your process, you can leave them hanging around (they get shut down with the process).

@Test
fun requestTermination(){
  val w = Worker.start()
  w.requestTermination().result
}

requestTermination returns a Future. If you need to wait for termination, check the result.

You Probably Won’t Use Worker

In the same way you probably don’t create a Thread instance or an ExecutorService very often in Java, libraries will probably keep you away from creating Worker instances directly. Unless KN state rules radically change, however, you won’t get away from those. You will, however, be seeing Worker a lot for the next few posts at least.

Up Next

Worker introduces us to the basics of running concurrent code on Native. Part 2 will go deeper into the why of KN state rules, freezing, detaching, and some more detail about what’s happening under the hood.


😛 But super interesting info!!!

OK. It’s not exactly Episode 1. The earlier post, from about 8 months ago, was supposed to be the start of the series, but things were changing really fast and I got more involved in library development. Yada yada, we’ll call that the pilot and this is the start of the series.

🔍 The test code is configured with a common source set and a native source set. The simplest way to get native tests running on the command line is to build a macOS target; the build process automatically builds and runs a command line executable. JVM is currently disabled because we’re not talking about the JVM 🙂

📄 The docs are pretty clear that you shouldn’t rely on that in the future as it may change, but for the foreseeable future, 1 Worker gets one thread.

🚮 That’s mostly true. There is a garbage collector in the runtime, but I’m pretty sure that’s there to deal with reference cycles. Memory is primarily managed by reference counting.

🍎 There are some important differences to note. KN can deal with reference cycles, so “weak” references aren’t a concern. Also, to be clear, KN objects are ref counted, and it’s conceptually similar to ARC, but it’s a separate system. While running on iOS, KN doesn’t use ARC for its ref counts.

😟 A fair number of people have expressed their hope that JB abandons the “Saner Concurrency” effort. The comment in that coroutines issue implies they might, or at least relax the rules somewhat. While I understand this stuff can be confusing, the ultimate goal is to produce a better platform. I would very much like some improved debug info from immutability related exceptions, and some improved library support, but once you get your head around this stuff it’s not that bad.

🍌 Calling for the future result forces the main thread to wait. That’s why this test works correctly. This can all get very tricky when trying to interact with the main thread, etc. There are frameworks and examples in more mature ecosystems to help out, but KN and multiplatform are in early days. Just an FYI.

😴 I’ve been asked if there’s any reason to learn this crazy threading stuff if the coroutines API will largely hide the details. Although we don’t know yet what changes, if any, will happen to the KN concurrency and state model to accommodate coroutines, unless JetBrains radically changes their plan and abandons everything, you’ll definitely need to understand this stuff.

Stately, a Kotlin Multiplatform Library

This started as a monster single post, now split in 2. Part 1, Saner Concurrency, is about what Kotlin is doing with concurrency.

During my talk at KotlinConf, I promised a part 2 of Stranger Threads to better explain threading and state in Kotlin/Native. I built a library instead.

Update!!! Video and Slides from my Droidcon UK talk!

What is Stately?

Stately is a collection of structures and utilities designed to facilitate Kotlin/Native and multiplatform concurrency. As of today, it is a set of expect/actual definitions that most apps will wind up needing, and a set of frozen, sharable collection classes that allow you to maintain mutable collections across threads.

Why does it exist?

Kotlin/Native, and hopefully soon all of Kotlin, will be implementing Saner Concurrency. Native today has runtime rules, and notably lacks “standard” concurrency primitives, which together help ensure concurrency sanity at runtime. The 2 basic rules are:

  1. All mutable state is in one thread. It can be transferred, but is “owned” by one thread at a time
  2. All shared state is immutable (frozen)

These are good rules, because they will make concurrent code safer.

However, there are times when being able to access shared, mutable state is pretty useful. That may change as we adapt our architectural thinking, but at least for today, a few tricky situations have come up without it available. Global service objects and caches, for example.

Kotlin/Native has a set of Atomic classes that allow you to mutate state inside of an immutable (frozen) object. The Stately collections are constructed with atomics.
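
As a sketch of what those atomics allow (the holder class here is our own illustration, not Stately’s API):

```kotlin
import kotlin.native.concurrent.AtomicReference
import kotlin.native.concurrent.freeze

// The holder itself is frozen, but the AtomicReference inside it can
// still be pointed at new (frozen) values from any thread.
class LatestValue(initial: String) {
    val ref = AtomicReference(initial.freeze())
}

fun updateFrozenHolder(): String {
    val holder = LatestValue("initial").freeze()
    holder.ref.value = "updated".freeze() // fine, despite holder being frozen
    return holder.ref.value
}
```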

For the most part, I’d expect Stately to be used sparingly, if at all, but it’s one of those things that you’ll really miss if you need it and don’t have it.

What you shouldn’t use it for

Kotlin/Native has rules that help encourage safer concurrency. Changing frozen state is, on some level, enabling manually managed concurrency. There are practical reasons to have these collections available, but if you’re using them a lot, it might be better to try for some architectural changes.

If you’re new to Native and running into mutability exceptions, you might be tempted to make everything atomic. This is equivalent to marking everything synchronized in Java.

It’s important to understand how Native’s threading and state work, and why you need a concurrent collection. But, you know, no judgements.

Basic Principles

The collections mostly act like their mutable counterparts. You designate generic types, and get/set/remove data entries. One key thing to note:

Anything you put into the collection will get frozen.

That is very important to understand. The collection itself is “mutable” in the sense that you can add and remove values, but the collection and values it holds are all frozen.
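
That follows from how freeze works in Kotlin/Native generally: it’s transitive across the object graph. A quick sketch, outside of Stately:

```kotlin
import kotlin.native.concurrent.freeze
import kotlin.native.concurrent.isFrozen

data class User(val name: String)

// Freezing a container freezes everything reachable from it, which is
// essentially what happens when a shared collection freezes what you add.
fun freezeIsTransitive(): Boolean {
    val users = mutableListOf(User("alice"))
    users.freeze()
    return users.isFrozen && users.first().isFrozen
}
```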

For data objects, this is generally OK, but for callbacks, this may be somewhat mind bending to understand. That’s not a Stately problem so much as a Kotlin/Native problem.

I’ve talked to several people who struggle with this reality. All I can say is it seems weird at first, but is not as big of a deal as you think. You just really need to be aware of what that means.

For example, in the Droidcon app, we’re using LiveData to build a reactive architecture. To avoid freezing everything in the Presenter/UI layer, we simply keep all of our callbacks thread local to the UI thread, which means they don’t need to be frozen. The lower level callbacks all exist in Sqldelight, and are frozen, but they exist just to push data back to the main thread and don’t capture UI-layer state.

In summary: it’s different, but not that bad. The details would turn this blog post back into a monster, so I’ll just push my promised “Stranger Threads” deadline out by a couple of weeks.

A lot of how we think about architecture will change as coroutines mature on Native, and as the community has some time to think about the implications. I suspect there will be less use for shared collections as that happens, but for today, they’re pretty useful.

Available Collections

CopyOnWriteList

Similar to Java’s CopyOnWriteArrayList. In fact, the JVM implementation is Java’s CopyOnWriteArrayList. Useful in cases with infrequent changes and critical read stability. Registering callback listeners, for example. If you’re changing the list often, or it’s large, each edit requires a full copy. Something to keep in mind.

SharedLinkedList

This is a mutable list that will have reasonable performance. All edits lock, just fyi, but individual edits themselves are pretty small, so locks are quick. You can hold onto a node reference that will allow you to swap or remove values without traversing the list, which is important in some cases.

There are 2 basic flavors. One has unstable but performant iterations, the other I’d call CopyOnIterateList. It’s similar to CopyOnWriteList, except the copy happens when you call iterate. This may prove more generally useful than COWAL, as it should handle frequent changes better, but if you’re editing often and iterating often, remember you’re dealing with an aggressively locking data structure.

The other flavor will let you iterate without copying or locks, but edits to the list will be reflected while you’re iterating. The edits are atomic, so you won’t get errors, but you’ll also wind up with potentially fuzzy iterations. If that isn’t an issue, this should be a better choice.

I am aware that there are lockless implementations of linked lists, but I didn’t attempt one. Going for simple.

SharedHashMap

Implements a pretty basic version of our favorite structure HashMap. Performance should have similar characteristics to what we’re used to from Java🌶, although obviously locking will produce different absolute numbers. In short, same big O.

I probably don’t need to go into where a hash map would be useful, but in the service object context, it’s probably caches, which leads to…

SharedLruCache

Shared version of a least recently used cache. If you’re not familiar, there’s a cap on the number of values, and the “oldest” get bounced first. “Oldest” being defined as the least recently accessed.
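
To illustrate the eviction semantics, here’s a plain-Kotlin sketch of LRU behavior (our own toy class, not the Stately implementation, which also handles cross-thread access):

```kotlin
// Capacity-2 cache: after touching "a", inserting "c" evicts "b",
// because "b" is now the least recently accessed entry.
class TinyLru<K, V>(private val maxSize: Int) {
    private val map = LinkedHashMap<K, V>()

    fun get(key: K): V? =
        map.remove(key)?.also { map[key] = it } // re-insert to mark as newest

    fun put(key: K, value: V) {
        map.remove(key)
        map[key] = value
        if (map.size > maxSize) map.remove(map.keys.first()) // evict oldest
    }

    val keys: List<K> get() = map.keys.toList()
}
```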

Status

This is pretty new. As Kotlin 1.3 matures, we should have a more stable deployment, and may add some other features. I’ll try to add some issues for “help-wanted” in case anybody wants to contribute.

Also, as of today (Friday 10/26) the JS code has an implementation but tests need to be wired in. That means don’t use the JS until that happens.

Notes
The only supported Native implementation is for Mac and iOS. Other Native targets should work, except we’ll need to find a lock implementation. pthread_mutex is fine, except you need to destroy it explicitly, and K/N has no destructors. That would mean a ‘close’ method on the collection, which I’d rather avoid. Right now there’s a spin lock, but I’m not sure that’s a great idea.

The collection implementations could be described as minimal. My main goal was to start replacing some of the custom C++ code we’d put into earlier K/N implementations, which generally existed because of a need for shared state. If there’s a real need for something else, open an issue to discuss.

🌶 Similar to Java 7 and below. That is, if your bucket size doesn’t increase, as your entry size increases, or you have a bad hash, you start to approach N, because the bucket list means a lot of scanning. In Java 8, after the list gets to size 8(ish), it’ll store those values as a tree, so worst case gets capped. Our hash map resizes like Java’s, so you should only find yourself in “worst case” if your hash is bad, or if you have horrible settings on your map. You probably don’t need to know this, but Java 8’s optimization is kind of cool, and I didn’t bother trying to implement it 🙂