Posted by Kevin Galligan · 5 min read
Kotlin Multiplatform Mobile (KMM/KMP) and Kotlin 1.5.30
Kotlin 1.5.30 officially launched just now. There has been a steady stream of Kotlin releases over the last year or so, each with new features and fixes.
In other words, business as usual.
Since we (Touchlab) are plugging away, day in and day out, on client projects using KMM🐶, it can be easy to lose track of the bigger milestones. Kotlin 1.5.30 feels (numerically speaking) like just another point release, but it is a milestone advance for the platform. 1.5.30 presents a few big features, at least in preview, which will ultimately pave the way for “production ready” mainstream adoption of Kotlin shared code technology.
Kotlin 1.5.30 is a turning point in KMP. Here’s why.
Memory Model Preview
One of the most controversial aspects of Kotlin/Native is the concurrency and memory model. There are a lot of takes on it. Whether it was a good or bad thing, and why, is beyond the scope of today’s post. However, I talked about it much more than anything else over the last few years, because for KMM and KMP to grow, people would need to understand it. Still, it’s been a significant blocker for some. That will be changing soon. I would not run the new memory model in production yet 🤞, but you can start experimenting with it now, and expect production deployments to pop up as we settle into 1.6 (I’m guessing 🙂) 🚀.
What is this memory model business, you ask? Well, in some languages (Java, Swift, etc.), multiple threads can freely read and write shared state, which can be dangerous and error-prone in a number of ways. Some languages don’t let you access state from multiple threads at all (JS), and some build memory-ownership rules into the language and compiler (Rust). Kotlin/Native introduced a novel model of its own: state shared between threads must be frozen, meaning deeply and permanently immutable. In concept, it’s interesting, but in practice, it can be confusing and a barrier to adoption.
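To make the current (strict) model concrete, here’s a minimal Kotlin/Native-only sketch of freezing. It compiles only for Native targets, and the names (Settings, endpoint) are illustrative:

```kotlin
import kotlin.native.concurrent.Worker
import kotlin.native.concurrent.TransferMode
import kotlin.native.concurrent.freeze
import kotlin.native.concurrent.isFrozen

data class Settings(var endpoint: String)

fun main() {
    val settings = Settings("https://example.com")
    val worker = Worker.start()

    // To cross a thread boundary safely under the strict model, the
    // object graph is frozen: deeply, recursively immutable from then on.
    worker.execute(TransferMode.SAFE, { settings.freeze() }) { frozen ->
        println(frozen.isFrozen)      // true
        // frozen.endpoint = "other"  // would throw InvalidMutabilityException
    }.result

    worker.requestTermination().result
}
```

Under the new memory model, the mutation in that last comment would simply be allowed, which is exactly the “relaxing the rules” change discussed below.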
The new memory model will presumably allow unrestricted shared access to state. There are outstanding questions. Will there be equivalents to the JVM concurrency features (synchronized, etc)? If you write a library that needs the new memory model, will you get an error compiling with the freeze model? That kind of thing, but I’m sure these will be resolved in time.
Check out Russell’s quick look at the memory model preview
What will those changes mean to your code, and what should you do today? For the next 6 months at least, you’ll need a working understanding of the current model, and your code probably won’t change at all when you switch on the new model. It’s relaxing the rules more than changing them. My thoughts from mid-2020.
Compile-time Code Analysis
Compiler plugins have been maturing over time, as have the IR compilers for JVM and JS. That maturation isn’t exactly tied to 1.5.30, but KSP support for Kotlin Native and KMP in general is being added, with K/N support added a few weeks ago.
Kotlin Native is quite static at runtime, so any sort of significant code analysis and manipulation needs to be done at compile time. Because there was no real “annotation processor” equivalent, there are a number of libraries that can’t really be built in the same way that they have been for Android. Things like Dagger are only possible with compiler plugins. Any real mocking will need code transformation. Also, generating Swift code to make integration easier needs stable (and reasonably documented) compile-time code analysis. 1.5.30 doesn’t suddenly turn all of these things on, but the maturity of the plugins, the addition of KSP, and the multiplatform IR compiler becoming the default will greatly accelerate areas of library development that were significantly more difficult with earlier versions.
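To give a flavor of what KSP makes possible, here’s a skeletal processor sketch. The class and interface names come from the KSP API; the @GenerateSwift annotation and everything it “would” do are hypothetical:

```kotlin
import com.google.devtools.ksp.processing.Resolver
import com.google.devtools.ksp.processing.SymbolProcessor
import com.google.devtools.ksp.processing.SymbolProcessorEnvironment
import com.google.devtools.ksp.processing.SymbolProcessorProvider
import com.google.devtools.ksp.symbol.KSAnnotated
import com.google.devtools.ksp.symbol.KSClassDeclaration

// A processor that visits classes marked with a (hypothetical) annotation.
class SwiftFriendlyProcessor(
    private val env: SymbolProcessorEnvironment
) : SymbolProcessor {

    override fun process(resolver: Resolver): List<KSAnnotated> {
        resolver.getSymbolsWithAnnotation("com.example.GenerateSwift")
            .filterIsInstance<KSClassDeclaration>()
            .forEach { decl ->
                // A real processor would emit code via env.codeGenerator here.
                env.logger.info("Found: ${decl.simpleName.asString()}")
            }
        return emptyList() // no symbols deferred to a later round
    }
}

// KSP discovers processors through a provider registered via a service file.
class SwiftFriendlyProcessorProvider : SymbolProcessorProvider {
    override fun create(environment: SymbolProcessorEnvironment): SymbolProcessor =
        SwiftFriendlyProcessor(environment)
}
```

This is the same shape of compile-time analysis that annotation processors gave Android; with K/N support landing in KSP, the pattern becomes available to multiplatform libraries too.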
Hierarchical Modules/Source Sets
I would say this falls under the general header of maturing and stabilizing tools, but very specifically, the Hierarchical MPP (HMPP). Especially on native, there can be many targets with very similar dependency APIs, but configuring the IDE and compiler to recognize them has not been fully functional. It has worked OK for simpler situations like ios(), but for libraries with nested hierarchies and many targets, publishing HMPP-enabled libraries hasn’t really been an option.
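For readers who haven’t set this up, here’s a rough build.gradle.kts sketch of a hierarchical layout. The intermediate “appleMain” source set and the target list are illustrative, not a recipe:

```kotlin
// build.gradle.kts (kotlin("multiplatform") plugin applied)
kotlin {
    jvm()
    ios()      // shortcut target: iosArm64 + iosX64 with a shared iosMain source set
    macosX64()

    sourceSets {
        val commonMain by getting

        // An intermediate source set shared by all Apple targets, so
        // platform-similar code (and dependencies) live in one place.
        val appleMain by creating {
            dependsOn(commonMain)
        }
        val iosMain by getting {
            dependsOn(appleMain)
        }
        val macosX64Main by getting {
            dependsOn(appleMain)
        }
    }
}
```

The point of HMPP is that the IDE and compiler actually understand these intermediate source sets, and that libraries published this way resolve correctly for consumers.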
With Kotlin 1.5.30, that has been (mostly?) fixed! With gentle nudging from @sellmair, we’ve figured out how to correctly configure relatively complex library modules. Over the next few days we’ll be publishing HMPP library versions. Currently published:
SQLDelight HMPP and M1 Mac ARM support are merged and will hopefully launch soon. Possibly Koin and others, although I haven’t checked in for a while…
Our logging library, Kermit, is getting HMPP-related changes, but is also going through a fairly significant overhaul focused on performance and features. You can try preview version 0.3.0-m1. The API itself has changed somewhat, but should be compatible for most uses.
Also…
M1 architectures are a welcome addition. All of the libraries mentioned above have Mac ARM architecture targets published.
There’s direct support for XCFramework. We’ve spent a fair bit of time helping clients integrate KMM into their production build and CI environments, and demand for Swift Package Manager is significant. Streamlining XCFramework is critical for that. We’ve had some productivity issues using various external solutions and have started experimenting internally. Hopefully direct support will improve that situation.
Touchlab?
The only work we do now that isn’t KMP is for projects we’ve had for years. We are all-in on the tech, and hiring. If you want to jump into KMP with both feet, think about coming to work with us.
I am also interested in talking to Dev Rel people. We’re not quite hiring for that yet, but looking into it. Would love to get some feedback on what we’re thinking. Please DM me if you’d like to chat.
🐶 and KMP. Doing some Kotlin/JS work too!
🤞 We’re going to switch it on for the refreshed Droidcon app. In person events or not, we’ll publish the app anyway, just for fun.
🚀 Some. Not everybody. It’ll take some time to work out issues, but somebody has to go first, right?