
20 Years Of Java…

Twenty years ago in a Zurich apartment two momentous things happened.

My daughter took her first steps and a young post doctoral researcher (her dad) took his first steps with Java.  It is really hard to fully understand what Java was back then.  These were the days in which TCL was all the rage and Java had some slightly strange relationship with fridges and toasters.  There was no obvious use for Java, but somehow it was gaining momentum like a steam train on a steep down gradient.

What first attracted me to the language was actually applets; the idea of having a real time 3D visualisation of molecular structures embedded in one of these “new and all the rage” web pages seemed quite intoxicating. Whilst, simultaneously, to a Fortran and C programmer, Java seemed an unimaginably clunky and inelegant language.

Over the following 20 years I have never spent more than a few months away from Java.  It has transformed the world of computing and been partly responsible for breaking the monopolistic hold on IT which Microsoft so keenly relished in its heyday.  Java has become much more powerful, unimaginably faster, infinitely more scalable and remarkably more clunky whilst simultaneously, horrifically less and substantially more elegant (varhandles, autoboxing – yin and yang).

In this post I wish to give a very personal review of Java’s evolution over these two decades highlighting some of the good and some of the bad and a sprinkling of the remarkably ugly. This will be done with affection and hopefully shed some light on where Java is going and what dangers lie ahead for it. I leave futurology to the next post.

How Important Is Java?

Let’s not be squeamish about this; Java is one of only 4 truly paradigm shifting commercially relevant general purpose programming languages – ever. Fortran, COBOL, C and Java. We can all have our favorite languages and spout how Python is more important than COBOL in the history of computing or that C# is better than Java and so more important. However, neither Python nor C# shifted any paradigms (C# is and always has been just an incremental re-imagining of Java and Python is actually a long distant descendant of awk).  SQL is not a general purpose language and Lisp has never been commercially relevant (roll on the haters – but there it is).

An aside for C++ to explain why it is not in my list: simply put, C++ was not a big enough factor soon enough before Java hit the scene.  People did not shift in hordes from COBOL to C++.  Whilst it is an important language, its paradigm shifting, world view altering impact has been much less than Java’s.

Java’s Similarity With Dr Who

Java has not been a powerhouse of continuous success, but it sure has been a powerhouse of success; we might like to believe its progress has been focused and planned whilst turning a blind eye to the utter failure of some mainstream Java developments and the staggering successes derived from “voices off”.

Each time Java and the JVM seemed on the brink of annihilation by some nemesis (C#, Go, Ruby etc) a regeneration has occurred resulting in another series of exciting episodes.  Even hideous wounds such as the JNI interface or traumatising terrible parallel executor streaming mess thingy have not been enough to kill our hero. Similarly, remarkable performance enhancements such as the hotspot VM and a huge range of compiler optimisation tricks introduced in Java 7, 8 and 9 have continuously kept Java relevant in a world where CPU clock speeds have stalled and post crash IT budgets are hungry for cost savings.

Escape analysis has helped Java escape cost analysis?  (OK, that one is too much Alex, back off with the wit.)

Although the natural tendency of a retrospective is to follow time’s arrow, I found remarkable challenges in doing so for Java.  Alongside those other most commercially important of languages, C, Fortran and COBOL, Java’s history is as multi-threaded as its runtime, and recursive too: external forces have bent Java, and Java has in turn reshaped the world of IT.

To illustrate this point we can look at JEE and Hadoop.

The Elephant And the Fish

Around the turn of the century programming went a bit mad. Something which should have been really simple, like serving a web page, suddenly required (what felt like) pages of XML and screeds of Java code just to define a ‘Servlet’.  This servlet would further be supported inside an ‘application server’ which had yet more XML defining Java beans which swam in a sea of config and services.

Some readers might find my personal view distasteful and feel that J2EE (now rebadged JEE) was/is just amazingly brilliant.  It was in some ways because it showed how a new, modern programming language could finally break the stranglehold of the Mainframe on commercial scale business computing.  The well defined pieces of J2EE (or pieces used by it) like JDBC and JMS were really amazing.  Suddenly we had good chunky business processing tools like database connectivity and inter-system messaging.  Java looked like it really could reshape everything from banking to warehouse management onto a distributed computing environment.

The snag was that the implementation of Java Enterprise Edition was terrible in almost every way.  I say this from personal experience, not from a theoretical point of view.  Back in the very early 2000s I was a J2EE developer.

The story was something like this: “Everything is too slow. The end.”

To be more gracious I will give a little more detail.  I worked for a company which created software for the retail industry. Their solutions were originally all in C and worked with Oracle relational databases.  Moving to J2EE was a huge bet on their part and required a substantial investment in retraining and other resources (they went bankrupt).  One of the customers for this new range of Java based software was a nascent (and still running many years later) Internet grocer. Their system consisted of big (by the standards of the time) 16 CPU Sun servers.

The overhead of the J2EE system, with its clunky state management where some beans persisted data to the database over JDBC and others managed logic and so on, killed performance.  Even with the ‘local’ and ‘remote’ interface ideas which came in with later versions of J2EE, the heavy reliance on JNDI for looking beans up and then on serialisation for communicating between them was crippling.

The system further relied on JMS, which was catastrophic in Weblogic at the time (version 5 if I remember correctly).  Indeed, the Weblogic JMS implementation we started out with serialised the messages to Oracle using blob types which Oracle 8i was unable to manage inside transactions. Yes really, JMS message persistence was non transactional, but they still asked for money for this garbage.

So, I spent 6 months of my life ripping the business logic out of J2EE and implementing it in what we would now call POJOs (plain old Java objects). I went further and replaced JMS with a PL/SQL based messaging system which was accessed from Java using the PL/SQL to Java bindings.  All this worked well and many, many times faster than the J2EE system.

Then a friend and co-worker of mine rewrote the whole thing in PL/SQL and that was even faster still.

You might not be surprised that this poisoned my view of J2EE from then on.  Its basic failures were an obsession with cripplingly complex and slow abstractions and the very concept of an application server. Neither of these are actually required.

Just when the crushing weight of JEE seemed to spell a long slow death for large scale business Java, Google blew the world up with its famous papers on GFS, Map-Reduce and BigTable.  The Google File System and the systems which ran on top of it ushered in a new way of thinking about processing.  The ‘embodied’ programming model of a computer running a server which then ran processes went away.  Further, the whole approach was somewhat low concept: run simple things in big redundant ‘clouds’ of compute resource.  However, what those ‘things’ were was much less prescriptive than the tightly interfaced and abstracted world of JEE.

Rather than succumb to this new nemesis our ‘voices off’ allowed Java to regenerate into an entirely new beast.  Hadoop was born and rather than the cloud being the death of Java in the enterprise it has embedded Java in that enterprise for the foreseeable future.

Phones Are The New Fridges

Bringing platform independence into developer consciousness is one thing for which I believe we all owe a huge debt of gratitude to Java. Viewing software development as largely independent of OS vendor hype revolutionised higher level systems architectural thinking. That one could write something on Windows and run it on Linux (or Solaris or Irix or whatever) was just mind melting back in the late 90s.

I personally believe the combination of Java’s platform independence and the rugged simplicity of Hadoop are the two forces most responsible for preventing Microsoft ‘taking over the world’ with .Net.

Where does this platform independence come from?  What was the underlying purpose for it back in the day?  Well, we can rewrite history and say different things post hoc.  Nevertheless, I clearly remember Sun saying it was all to do with fridges and toasters. Somehow they were completely convinced that automated appliances were the future (right) and that Java would be the way to write one appliance management program and run it everywhere (wrong).

Getting that second part wrong is hardly a major failure; there was no way that Sun could have predicted super low cost CPUs running a stable open source operating system would prove to be the abstraction of choice over a virtual machine.  Linux has completely upended the world by providing platform independence at the OS level and by being free.  However, that is another story and not the story of Java; instead along came Android.

A lot of business Java developers do not really think about the impact of Android because it does not run the JVM. Nevertheless, it does run Java.  Things are shifting a bit more now (as far as I can tell), but back even 5 or 6 years ago the standard way to develop an Android app was to write it in Java on a PC using an Android emulator, compile it down to byte code and then cross translate the JVM byte code to Dalvik byte code.

Indeed, this process was so awesomely doable that back when I worked with Micro Focus we compiled COBOL to JVM byte code, translated that to Dalvik and then ran a COBOL app on an Android phone.  I am not saying that was a good thing to do, but it sure was fun.

My point being that Android (and to a lesser extent Java feature phones before then) made Java relevant to a huge community of up and coming developers.  I suspect universities teach Java and not C# right now because of Android.  Yet again, ‘voices off’ saved Java and allowed it to regenerate into a new Doctor to take on new challenges in a great and exciting new series (actually – I don’t watch Dr Who – I did back in the 70s and 80s though; I sort of lost interest when Lalla Ward and Tom Baker left the series).

It is with some wry amusement that I look back on discussions as to whether ‘Android is proper Java’ and some feelings of hostility between Google and Oracle; it is unarguably a fact that Google taking on Dalvik and Java as the platform for Android massively enhanced the value of the Java asset Oracle came to own.

Simplicity And Elegance – JMM

Java is rarely seen as trail blazing simplicity and elegance, yet in one regard it really has shown other mainstream languages the way forward.  The introduction of the new Java memory model as part of the Java 5 standard was a triumph of simplicity and effectiveness.

Let’s get serious about how big this was; for the first time one of the big commercial programming languages laid out in clear terms all the ‘happens-before’ relationships of the language in a multi-threaded environment.  Gone were all the concerns about edge cases; all the missing optimisations due to trying to maintain similarity between behaviors which were never originally specified. Suddenly, Java became the ‘go to language’ for developing lock free and wait free algorithms.  Academic papers on things like skip list implementations could be based on Java.  Further, the model then permeated out to any other language which was based on the JVM.
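
To make that concrete, here is a minimal sketch (my own illustration, class and field names invented, not taken from the JLS) of the kind of guarantee the new model spells out: the write to a volatile field happens-before any read that observes it, so plain writes made before the volatile write become visible too.

public class HappensBefore {
    private int payload;            // plain field
    private volatile boolean ready; // volatile field provides the ordering

    void writer() {
        payload = 42;  // 1: plain write
        ready = true;  // 2: volatile write publishes 'payload'
    }

    void reader() {
        if (ready) {                     // 3: volatile read; happens-before means
            System.out.println(payload); //    this must print 42, never 0
        }
    }
}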

Other JVM languages are not the limit of its impact; to quote Wikipedia:

“The Java memory model was the first attempt to provide a comprehensive memory model for a popular programming language.[5] It was justified by the increasing prevalence of concurrent and parallel systems, and the need to provide tools and technologies with clear semantics for such systems. Since then, the need for a memory model has been more widely accepted, with similar semantics being provided for languages such as C++.[6]”

So, yes, Java taught C++ how to do memory modelling and I felt the impact both with Java 5 and then with C++11.

Unsafe But Required For Any Speed

Java’s fatal flaw, ever since hotspot finally put compilation/interpretation to bed, has been and might well always be its resource allocation model.  Java (like many other languages – Python for example) treats memory as a completely different resource to anything else.  Consider C, in which memory is allocated via malloc, which returns a pointer to that memory; this resource is freed by making a call to free.  Files in C are generally opened by fopen and closed by fclose.  In other words, the use of memory and file resources in C is symmetrical.  C++ goes further in having scope based resource management (RAII – even Stroustrup admits that is a terrible name) which allows symmetrical treatment of memory resources (new/delete) and other resources (files, sockets, database connections, etc) in the same way and often completely automatically.
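
To make the asymmetry concrete, here is a small illustrative sketch (the file name is invented): the byte array is reclaimed whenever the garbage collector gets around to it, while the file has to be released explicitly, or scoped with try-with-resources.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ResourceAsymmetry {
    public static void main(String[] args) throws IOException {
        byte[] buffer = new byte[1024 * 1024]; // memory: no free(); the GC reclaims it eventually

        // file: deterministic release, but only because we ask for it
        try (BufferedReader reader = new BufferedReader(new FileReader("example.txt"))) {
            System.out.println(reader.readLine());
        } // close() happens here, exceptions or not

        buffer = null; // at best a hint; reclamation time is entirely up to the GC
    }
}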

For some reason which is not clear to me, it became considered a good idea in the 90s to develop programming languages which treat the resource of memory completely differently to all other resources.  From a CPU point of view this does not really make a lot of sense.  Main memory is connected through a chip set to the CPU, as are the hard drive and the network cards.  Why is memory somehow very different to these other two?

Indeed, what we have seen over the last 20 years is main memory become more and more like all other resources, as memory latency compared to CPU speed has become a larger and larger issue.  In modern NUMA architectures, reaching across the motherboard to a separate memory bank can take tens of clock cycles.  Further, running out of memory is much more fatal than other resource issues.  Memory is more precious than network connections, for example.  If a socket gets dropped the program can try to re-establish it in a loop; if an out of memory error occurs the program is doomed.  Indeed, it might not even be able to log that the error occurred.

Alongside the asymmetry of resource management, Java is also really poor at IPC and internal inter-thread communication (less so now – see later).  You might be shouting at the screen right now saying ‘But Java has excellent library support for inter thread communication and handles sockets for IPC’.  Whilst that is true, the world moved on; suffering a context switch to pass data from one thread to another or from one process to another is no longer acceptable. The wide adoption of memory fence based queuing and shared memory started to make Java look clunky and slow against C and C++. Especially with C++11 adoption, Java’s abilities looked dire.

But, as is so often the case, the community found ways around this. Lurking in the JDK’s guts was (and still is, to be clear) a class called sun.misc.Unsafe.  In Java 8 it was even substantially improved and expanded.  It turns out that the JDK developers needed more low level access to the computer hardware than public JDK classes provided, so they kept adding stuff to this dark secret.

Back when I worked for Morgan Stanley I was involved with a project to get C++ low latency systems to ‘talk’ to Java over shared memory.  To ensure that the approach to atomics on Intel x86 was the same for the C++11 standard and sun.misc.Unsafe, I went through the OpenJDK native code. Indeed, whilst some of the sun.misc.Unsafe operations were a little sub-optimal (looping on CAS for an atomic write rather than using a lock prefixed move, for example), the approach of fencing on write and relying on ordered reads matched 1:1 with C++11.

Because sun.misc.Unsafe methods are intrinsics, their performance is fantastic, especially with later JVMs.  JNI calls are a safe point, which prevents the optimiser inlining them or unrolling loops containing them (to a greater or lesser extent).  With intrinsics, the optimiser can reason about them as though they were any other Java methods.  I have seen the optimiser remove several layers of method calls via inlining and then unroll an outer loop so that sun.misc.Unsafe.putLong() reached the same speed we would see in a profile guided optimisation C program. Frankly, as profile guided optimisation is used so rarely in C and C++, Java and sun.misc.Unsafe can in reality end up faster than the equivalent C.  I always feel like sticking my tongue out after I say that – not sure why.
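
For anyone who has never actually seen it, here is a minimal sketch (not production code; the reflective grab of ‘theUnsafe’ is the usual OpenJDK back door, and the value written is arbitrary) of the kind of off heap poking sun.misc.Unsafe permits:

import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class OffHeapSketch {
    public static void main(String[] args) throws Exception {
        // Unsafe.getUnsafe() rejects ordinary class loaders, so reflect on the singleton.
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        long address = unsafe.allocateMemory(8);  // raw, non garbage collected memory
        try {
            unsafe.putLong(address, 0xCAFEBABEL); // intrinsified write
            System.out.println(Long.toHexString(unsafe.getLong(address)));
        } finally {
            unsafe.freeMemory(address);           // symmetric, C style release
        }
    }
}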

Purists can sometimes hate sun.misc.Unsafe, as this now rather infamous post reveals.

“Let me be blunt — sun.misc.Unsafe must die in a fire. It is — wait
for it — Unsafe. It must go. Ignore any kind of theoretical rope and
start the path to righteousness /now/. It is still years until the
end of public updates to JDK 8, so we have /*years */to work this out
properly. But sticking our heads in the collective sands and hoping for
trivial work arounds to Unsafe is not going to work. If you’re using
Unsafe, this is the year to explain where the API is broken and get it
straight….

Please help us kill Unsafe, kill Unsafe dead, kill Unsafe right, and do
so as quickly as possible to the ultimate benefit of everyone.”

Well, as we say in England, “That isn’t happening mate.”  As this post illustrates, it is everywhere, and everywhere it is, it is essential.  My personal OSS audio synthesis program Sonic Field uses sun.misc.Unsafe to directly access the memory of memory mapped files inside direct byte buffers.  Not only that, but it then stores the addresses of each memory mapped segment of a larger file in off heap (malloc’ed) memory.  All this might sound like it would be slow, but because the intrinsics allow inlining it ends up much faster than going through the mapped byte buffers’ public API.  Further, because all this memory is not garbage collected it does not move around in the virtual address space, which helps optimise CPU data cache use.
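
Sonic Field’s actual code is not reproduced here; the following is only an illustrative sketch of the general technique (file name invented and assumed to be at least 8 bytes long; the reflective peek at the undocumented Buffer.address field is exactly the sort of trick purists dislike): map a region, recover its native address and then read it straight through Unsafe so the access can be inlined.

import java.io.RandomAccessFile;
import java.lang.reflect.Field;
import java.nio.Buffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import sun.misc.Unsafe;

public class MappedUnsafeRead {
    public static void main(String[] args) throws Exception {
        Field uf = Unsafe.class.getDeclaredField("theUnsafe");
        uf.setAccessible(true);
        Unsafe unsafe = (Unsafe) uf.get(null);

        try (RandomAccessFile file = new RandomAccessFile("samples.bin", "r");
             FileChannel channel = file.getChannel()) {
            MappedByteBuffer map = channel.map(FileChannel.MapMode.READ_ONLY, 0, 8);

            Field af = Buffer.class.getDeclaredField("address"); // package private native address
            af.setAccessible(true);
            long address = af.getLong(map);

            System.out.println(unsafe.getLong(address)); // first 8 bytes of the mapping
        }
    }
}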

Just as with my application, there are countless programs out there which use sun.misc.Unsafe to allow Java to compete with, and sometimes beat, C, C++ etc.  At least the JDK/JVM developers have now realised this. Mind you, their partial fix – variable handles – is mind blowingly clunky (as I suggested at the start of the post – Java seems to be going that way).  However, if it really is (or becomes) as fast as sun.misc.Unsafe for managing memory fences and atomics then the clunkiness can be hidden inside libraries.  The good news is the developers have woken up to real community need and stopped drinking the abstraction/functional Kool-Aid (a bit).  Some hope for a better, faster Java remains.  Though I am disappointed to see little evidence of proper off heap support in varhandles as yet. Hopefully, this will come, or is there but somehow hidden (feel free to comment on your thoughts).

Generics For Generic Programmers

I sort of understand what type erased homogeneous structural parametric typing is now – it has taken many years.

Java added generics in Java 5 to much fanfare; undoubtedly this was a big improvement to Java, especially when considered in conjunction with autoboxing.  Suddenly a huge burden of type casting and boxing value types to reference types was removed from the programmer.  By so doing, Java’s type system became almost sound.  In other words, if the compiler was able to ‘see’ all the types being used via generics then the program would be (almost) guaranteed never to throw a class cast exception as long as it compiled.

If you have never programmed Java pre-generics then it is probably hard to imagine what a pain in the posterior the old type system was. For example, a container like Vector was untyped; it contained indexed Objects.  All reference types in Java are subtypes of Object and thus the Vector could contain anything which was a reference type; indeed any mixture of anything.  The poor schmuck programmer had to cast whatever was retrieved from the Vector to an appropriate type before using it.  Worse, said programmer had to ensure only appropriate types made it into the Vector; this latter step being something of a challenge in complex systems with heterogeneous programming teams.
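
For anyone lucky enough never to have lived it, a small before-and-after sketch (class and variable names invented):

import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

public class BeforeAndAfter {
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        // Pre Java 5: everything is an Object and the cast is on the programmer.
        Vector names = new Vector();
        names.add("Alice");
        names.add(Integer.valueOf(42));              // nothing stops this going in
        String first = (String) names.get(0);        // fine
        // String oops = (String) names.get(1);      // ClassCastException at runtime

        // Java 5 onwards: the compiler polices the container.
        List<String> typedNames = new ArrayList<>();
        typedNames.add("Alice");
        // typedNames.add(Integer.valueOf(42));      // does not compile
        String second = typedNames.get(0);           // no cast needed
        System.out.println(first + " " + second);
    }
}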

Needless to say, ClassCastException was a constant blight of Java programs.  Nowadays IDEs do a great job of warning about or even preventing usages prone to accidental NullPointerExceptions (predominantly) and generics get rid of ClassCastExceptions (mostly). Back in the early 2000s and before, programming Java had four stages:

  1. Write the code.
  2. Compile the code.
  3. Spend many, many hours/weeks/days fixing ClassCastExceptions and NullPointerExceptions.
  4. Get it to pass unit tests – return to step 1 many times.

All this generic stuff is just great apart from – what in the name of goodness are wild cards?  Whilst we are at it, what is type erasure?

I felt I had to know, and naturally I had to use both concepts to prove my mettle as a Java programmer.  Except, well, they are a bit tricky. Now that I have 2 JVM compilers under my belt and have also worked in commercial C++ programming a lot more, I guess I have a pretty good idea of what type erasure is.  Further, Java does not really use type erasure (don’t shout). What actually happens is that the type is erased in the executed byte code; the annotated byte code still has the types in there. In other words, we rely on the compiler to get types correct, not the runtime, and the compiler is not erasing types at the AST/type-system level.  This is also true of, for example, C++ when it inlines methods.  The type of the inlined method is completely erased during compilation but will be left in the debug info (at least for modern versions of C++). However, we do not call this type erasure. Funny how reality and ivory tower type discussions are so distant so often (by the height of the titular tower I guess).
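
A quick way to see what really does get erased: at runtime two differently parameterised lists share the same class object, because the type parameter only ever existed for the compiler (a sketch):

import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> integers = new ArrayList<>();

        // The parameter is gone from the executed byte code...
        System.out.println(strings.getClass() == integers.getClass()); // true

        // ...but the compiler enforced it at every call site before the code ever ran.
        // strings.add(42); // rejected by the compiler, not the runtime
    }
}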

Wild cards are another issue altogether.  I find them resistant to usefulness in the same way monads are.  I can understand wild cards, or briefly understand monads, but in the real world I need to get work done, so the cognitive burden of doing so is not worth the effort.

By way of example, let’s look at some Oracle documentation on the subject:

List<EvenNumber> le = new ArrayList<>();
List<? extends NaturalNumber> ln = le;
ln.add(new NaturalNumber(35)); // compile-time error

However, the following is much simpler:

List<NaturalNumber> ln = new ArrayList<>();
ln.add(new NaturalNumber(35)); // This is fine.

When might I actually need the wild card behavior in a real program? Even if I did need it the following also works:

class ConcreteNaturalNumber extends NaturalNumber {}
class EvenNumber extends NaturalNumber{
  // Stuff
}
List<ConcreteNaturalNumber> ln = new ArrayList<>();
ln.add(new NaturalNumber(42)); // Compile time error.

One way of looking at this is that List<? extends NaturalNumber> defines a new type implicitly; that type being ‘Any child of NaturalNumber’.  Whilst this seems like a good way to make the type system complete and might be of use for library developers, for simple mortals like myself, if I want a new type, why not explicitly create it?

So, generics seem overwhelmingly complex because of the embedded concepts of type erasure and wild cards.  However, over time the Java community has learned to largely concentrate on a subset of generics which uses explicit types and largely ignores erasure (just let the compiler and runtime do that under the covers). Hence, nowadays generic programmers like myself can use generics without having to get all concerned about corner cases and complex type rules.

This is something I really like about the Java community; it likes to go for what works.  This is in contrast to what I see in the C++ world where people look for every strange edge case which can be exploited and then do so just to prove they are clever enough.

Whilst I Am Typing About Type What Other Types Of Type Do Java Types Have To Understand Whilst Typing?

We could easily fall into the illusion that Object Hierarchical and Nominative Parametric typing is all that Java does; but no, that is so very far from the case.

Java moved away from object orientation in 1997 (yes really) with the introduction of the reflection API. To get a good feel for what that felt like at the time, this article was contemporary to the release (it talks about Java beans – do you remember those?).  Suddenly Java had full duck typing.  In other words, we could go look up a method on a class and call it without needing to know anything about the type of the class other than its name.  Say there is a method:

void wagTail(){
   // some stuff.
}

In two unrelated classes, say “CustomerService” and “Dog”. With reflection, objects of both CustomerService and Dog can have their tails wagged (whatever that might mean – no concept of a contract is even implied) without needing a common base class.
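
A purely illustrative sketch of that reflective tail wagging, with both classes stubbed in just to make the point:

import java.lang.reflect.Method;

public class TailWagger {
    public static class Dog {
        public void wagTail() { System.out.println("Dog wags tail"); }
    }

    public static class CustomerService {
        public void wagTail() { System.out.println("CustomerService wags... something"); }
    }

    static void wag(Object anything) throws Exception {
        Method m = anything.getClass().getMethod("wagTail"); // found by name alone, no contract
        m.invoke(anything);
    }

    public static void main(String[] args) throws Exception {
        wag(new Dog());
        wag(new CustomerService());
    }
}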

This took a chain saw to some fundamental concepts in Java and still has huge ramifications to this day. Some people (myself included) would rather have static typing with compile time type checked dynamic dispatch.  Others (seemingly most Java programmers) want to have full runtime dynamic dispatch and bypass static type checking.

Sure, full runtime dynamic dispatch with runtime type checking sort of works.  For example, Python does a great job of this with Python programmers being accustomed to adding extra duck type management code to keep stuff stable.  For Java, the implications could have been disastrous but actually (100% personal view warning) I suspect what it really did was force the development of Junit and other Java unit testing methodologies to the very sophisticated level they have now reached.  If you chuck compile time type checks out the window, you absolutely have to test the excrement out of your code and Java has been a world leader in this area.

I do find the current state of affairs where Maven and dependency injection work together to make absolutely certain that one has no idea at all what code will actually execute at any point rather depressing. Having said that, it seems to work well for the Java community and one does not have to write code that way (I don’t in Java at least).  Having seen multi-million line code bases in Python work just fine, my queasiness over runtime dynamic dispatch has dissipated somewhat.  Live and let live might be a good approach here.

Nevertheless, runtime duck typing was not sufficient for the world of Java.  More typing and dispatch systems had to be found to make Java more powerful, clunky, hard to understand and lucrative for the programmer!

First, and by far the most evil of these, was/is code weaving.  Take an innocent looking class and stick on an annotation.  Then, at runtime, this class has its very code rewritten to make it dispatch off to other code and completely alter its behavior (think Universal Soldier).  With this came aspect oriented programming which was both cross cutting and a major concern.  I guess I should not be too vitriolic; after all, code weaving did sort of help out with the whole POJO and Spring movement.

My understanding is that Spring does not require code weaving any more.  It dynamically compiles proxy classes instead of adding aspects to class behavior.  The outcome from the programmer’s point of view is much the same.  Slamming on the brakes pretty hard is required now because… Spring, and POJOs in general, acted as a counterweight to J2EE/JEE and, before Hadoop was even a big thing, helped save Java from a slow grey death.  Indeed, JEE learned a bucket load back from Spring and the aspect community, so all around the outcome was good.
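
Spring’s internals are not reproduced here, but the flavour of a dynamically generated proxy, using nothing more than the JDK’s own java.lang.reflect.Proxy, looks roughly like this (interface and class names invented):

import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxySketch {
    interface GreetingService {
        String greet(String name);
    }

    static class PlainGreetingService implements GreetingService {
        public String greet(String name) { return "Hello " + name; }
    }

    public static void main(String[] args) {
        GreetingService target = new PlainGreetingService();

        // The proxy class is generated at runtime; the handler wraps every call.
        GreetingService proxy = (GreetingService) Proxy.newProxyInstance(
                GreetingService.class.getClassLoader(),
                new Class<?>[] { GreetingService.class },
                (Object p, Method method, Object[] a) -> {
                    System.out.println("before " + method.getName()); // the 'aspect'
                    Object result = method.invoke(target, a);
                    System.out.println("after " + method.getName());
                    return result;
                });

        System.out.println(proxy.greet("world"));
    }
}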

Not satisfied with all this, the JDK developers wanted some new type concepts.  First came type inference.  Now, C# started with this by introducing the var keyword.  In an insane fit of ‘not invented here’ syndrome, Java went with the diamond operator.  This is better than nothing in the same way stale bread is better than starving.
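
For the record, the entire extent of the feature is this (a trivial sketch):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DiamondDemo {
    public static void main(String[] args) {
        // Pre Java 7: say it all twice.
        Map<String, List<Integer>> before = new HashMap<String, List<Integer>>();

        // Java 7 diamond: the compiler infers the <String, List<Integer>> on the right.
        Map<String, List<Integer>> after = new HashMap<>();

        after.put("answers", new ArrayList<>());
        after.get("answers").add(42);
        System.out.println(after + " " + before);
    }
}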

Having ‘half-assed’ it to Homer Simpson levels with <>, they went full bore with lambdas. From this article we get the following examples:

 n -> n % 2 != 0;
 (char c) -> c == 'y';
 (x, y) -> x + y;
 (int a, int b) -> a * a + b * b;
 () -> 42
 () -> { return 3.14; };
 (String s) -> { System.out.println(s); };
 () -> { System.out.println("Hello World!"); };

So “(x,y) -> x + y;” is a thing but “var x = 1;” is not.  Yay, that makes perfect sense.  Though in truth, it is really nice to have type inference in lambdas. If only they were first order referential closures rather than only supporting second order referential semantics (they close around effectively final state but can mutate references inside that state) they would be truly useful.  As it is, they cannot guarantee to have no side effects but they are not a full closure implementation.

Not yet convinced about second order referencing?  Try this (assume reportTicker is some mutable holder, say an AtomicLong, in the enclosing scope):

LongFunction<Long> broken = chunks -> {reportTicker.set(chunks); return chunks % 10;};

I just checked this compiles – and it does. The final (or effectively final) reportTicker object is mutated by the lambda broken.  So effective finality adds no guarantees to lambdas from a state point of view.  Lambdas are ordinary objects in a multi-threading context and are no easier to reason about than anonymous classes. All that effort to create lambdas and they ended up being syntactic sugar around anonymous classes (with a more complex implementation using invokedynamic). Still not convinced?  Here is the above lambda written using an anonymous class.

LongFunction<Long> broken = new LongFunction<Long>()
{
    @Override
    public Long apply(long chunks)
    {
        reportTicker.set(chunks);
        return chunks % 10;
    }
};

At least the streaming interface design was so woeful and fork/join threading so narrow in application that it makes Java lambdas look truly excellent in comparison.

If you do not like what I am saying here, just use C++11 lambdas as first class referential closures and see how very, very powerful a way of programming that is.

So, that really has to be the end of it, surely?  Those Java/JDK developers would not go and introduce another type system, would they?  That would be bonkers…

Well, they did – run time parameterised polymorphism; mad as a box of frogs but ultimately quite useful.  If Java’s type system had not already been pretty much a canonical example of the second law of thermodynamics, adding a new type/dispatch system would have been a very poor move; but the horse is well and truly out of the gate and has set up a nice little herd of mustangs in the mountains far away, so ‘why not?’

VarHandles – what fun:

“The arity and types of arguments to the invocation of an access mode method are not checked statically. Instead, each access mode method specifies an access mode type, represented as an instance of MethodType, that serves as a kind of method signature against which the arguments are checked dynamically. An access mode type gives formal parameter types in terms of the coordinate types of a VarHandle instance and the types for values of importance to the access mode. An access mode type also gives a return type, often in terms of the variable type of a VarHandle instance. When an access mode method is invoked on a VarHandle instance, the symbolic type descriptor at the call site, the run time types of arguments to the invocation, and the run time type of the return value, must match the types given in the access mode type. A runtime exception will be thrown if the match fails.”

I could not possibly add anything to this other than it gets more amusing each time I read it.  I guess I have to get my kicks someplace.
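
For the brave, a minimal sketch (class and field names invented) of what that paragraph boils down to in practice on Java 9 and later:

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class VarHandleSketch {
    private volatile long counter;

    private static final VarHandle COUNTER;
    static {
        try {
            COUNTER = MethodHandles.lookup()
                    .findVarHandle(VarHandleSketch.class, "counter", long.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void main(String[] args) {
        VarHandleSketch sketch = new VarHandleSketch();
        COUNTER.setVolatile(sketch, 1L);                         // volatile write
        boolean swapped = COUNTER.compareAndSet(sketch, 1L, 2L); // atomic compare and swap
        System.out.println(swapped + " " + (long) COUNTER.getVolatile(sketch));
    }
}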

Kafka, Spark And The Unbelievable Cassandra

Second generation cloud systems are now abounding and Java is once again leading the pack.  Whilst some cloud development is moving to C++, with notable players like Impala using some and Scylla using only this language, it is still fair to say most OSS cloud infrastructure work is either in Java or runs on the JVM.  For example, Spark, which seems to have grown from a spark to a forest fire over recent months, is written in Scala.  I am not sure why anyone would want to do such a thing, but there it is, and it works and is gaining traction all the time.

With these players comes a bright future for Java.  Obsolescence’s dark cloak is nowhere to be seen.  Though I do not view the next decade as challenge free, as I will discuss in the next section.

Monolith Ground To Sand

Java and the JVM have some basic concepts baked into them from day one.  As I discussed earlier, one of these is resource asymmetry.  Another is a closed sandbox.  This really made sense when Java was originally designed to run as a protected process in an applet and had no access to the OS from user source code.  In this model the Java language, coupled tightly to its development kit, had to provide everything required to perform desired tasks.  Microsoft’s absolute failure of concept in designing Azure to be pure .Net, with no concept of machines and no Linux, illustrates how this approach is utterly inappropriate for cloud computing.

Changes in computational hardware are not helping Java.  As I mentioned previously, NUMA is a poor fit for Java.  Even with NUMA aware garbage collection, the performance of one huge JVM on a server is strangled by the partitioned nature of that server.

To be challenging: “Does a large, multi-threaded, singleton VM make any sense when all serious computing requires the collaboration of many computers?”

Consider this: to compute something serious with my current employer requires tens of thousands of compute cores.  In other words, computations are not done at the server level but at the core and program level, distributed across many servers.  That there are even servers present is not seen by the end programmer.  As such, the JVM becomes a barrier, not a benefit.  Is it logical to have one huge JVM on each of many servers?  Probably not.  But then is it logical to have 32 small JVMs running on a server?  Given that the JVM is not designed to do this and is not designed to be started up and brought down in short cycles, there are huge challenges in this area.

Having said that – as always, Java is regenerating.  Start up times were reduced by the split verifier (well – I have been told that, I am not so sure in reality) and JDK sizes are now being controlled better using modules.  As such, startup/shutdown should be better now.  However, as one cannot fork a JVM, it will never be able to compete with other systems (C++, C, Rust, Python etc) which can use a fork and run model in the cloud.

I am not sure where the future lies in this regard.  It could be that the challenges of running large singleton JVMs in the cloud are not enough to deter people.  If this is so, the Monolith will continue.  If not, then Java and the JVM might have to fully regenerate once more to become lightweight.  That would be an impressive trick which I for one have never yet managed to pull off.

PS

Just in case I have not offended someone someplace, here are a bunch of things I should have discussed at length but felt the rant had gone on long enough:

  • Try with resources: Excellent.
  • Maven: Abomination.
  • Gradle: I did not think something could be worse than make, but it was achieved.
  • Swing: Cool but the web ate its lunch.
  • nio: Really good when it came out but needs a good polish up soon.
  • Valhalla: Could have been great but making value types immutable cripples the concept.  Reified intrinsic generic containers will be good.
  • Invoke dynamic: Too static but has promise.
  • Jmh: Brilliant and about time.
  • Ant: If only it was not XML it would be 4 out of 5 stars.
  • Mocking frameworks: Yes – I guess so but most of the time they seem over used.
  • G1 Garbage collector: As I am not convinced huge JVMs make sense, it is not clear G1 was necessary, but it is definitely not a bad thing.
  • JVMTI: Awesome.
  • Inner Classes: Yes, they were invented later and were not part of the original Java, and they are lovely.
  • OSGI: Life is too short.
  • Jigsaw: More like it.
  • Scala: Much like a Delorean, looks really cool but is ridiculously slow, hard to get started and breaks all the time.
  • The rest: Sorry I forgot about you, Java is so huge there is necessarily so much to forget about.

 

Author: Alexander Turner

Life long technologist and creative.
