
JDK, JRE, JIT, SDK, JVM Introduction


JDK (Java Development Kit)

JDK (Java Development Kit) is a superset of JRE (Java Runtime Environment): it contains everything the JRE has, along with development tools such as the compiler, debugger, and other utilities.

Java Development Kit

See Also: Java: Program Execution

JRE (Java Runtime Environment)

JRE (Java Runtime Environment) provides the environment in which the JVM (Java Virtual Machine) runs. The JRE contains the JVM, class libraries, and other supporting files, but excludes development tools such as the compiler and debugger.

This means you can run code with the JRE, but you cannot develop and compile code with it.
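As a quick illustration, here is a minimal sketch (the class name is arbitrary): compiling it needs the javac compiler shipped with the JDK, while running the resulting bytecode only needs the java launcher provided by a runtime.

// HelloJdkJre.java -- minimal sketch to illustrate the JDK/JRE split.
// Compiling needs the JDK's compiler:          javac HelloJdkJre.java
// Running the produced bytecode needs a JRE:   java HelloJdkJre
public class HelloJdkJre {
    public static void main(String[] args) {
        System.out.println("Compiled with the JDK, executed on the JRE/JVM.");
    }
}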

JDK and JRE Formulae

JVM (Java Virtual Machine)

JVM (Java Virtual Machine) runs the program by using the classes, libraries, and files provided by the JRE. The JVM is able to run programs written in Java as well as other languages that are compiled to Java bytecode, for example Jython, JRuby, Clojure, Groovy, Kotlin, etc.

JVM Architecture

Let's now discuss the terminology used for the JVM.

Class Loader

The class loader reads the .class file and stores the bytecode in the method area.
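As a small hedged sketch, the class-loader hierarchy can be inspected from plain Java code; the class name below is illustrative only.

// ClassLoaderDemo.java -- sketch showing which class loaders are involved.
public class ClassLoaderDemo {
    public static void main(String[] args) throws Exception {
        // The application (system) class loader loads classes from the classpath.
        ClassLoader appLoader = ClassLoaderDemo.class.getClassLoader();
        System.out.println("Application class loader: " + appLoader);

        // Walking up the parent chain shows the delegation hierarchy.
        System.out.println("Parent class loader: " + appLoader.getParent());

        // Core classes such as java.lang.String are loaded by the bootstrap
        // loader, which is reported as null.
        System.out.println("String's class loader: " + String.class.getClassLoader());

        // Class.forName asks a class loader to read the .class file and make
        // the class available (its bytecode ends up in the method area).
        Class<?> loaded = Class.forName("java.util.ArrayList");
        System.out.println("Loaded: " + loaded.getName());
    }
}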

Method Area

The method area holds class-level information from the .class file. The JVM has only one method area, which is shared among all classes.

Heap

The heap is the part of JVM memory where objects are allocated. The JVM also creates a Class object for each loaded .class file.

Stack

The stack is also part of JVM memory, but unlike the heap, it is used for storing temporary data such as local variables and method parameters.
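To make the heap/stack distinction concrete, here is a hedged sketch (class and variable names are illustrative): local variables and references live on the current thread's stack frame, while the objects they point to are allocated on the heap.

// HeapVsStack.java -- illustrative sketch of where values live.
public class HeapVsStack {

    static class Point {      // instances of Point are allocated on the heap
        int x;
        int y;
    }

    static int sum(int a, int b) {   // a and b live in sum()'s stack frame
        int result = a + b;          // so does the local variable result
        return result;
    }

    public static void main(String[] args) {
        int n = sum(2, 3);           // primitive local: stored on the stack
        Point p = new Point();       // the Point object itself lives on the heap;
        p.x = n;                     // only the reference p is on the stack
        System.out.println("p.x = " + p.x);
    }   // when main() returns, its frame (n, p) is popped; the Point object
        // becomes unreachable and eligible for garbage collection
}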

PC Registers

PC registers keep track of which instruction has been executed and which one is going to be executed next. Because instructions are executed by threads, each thread has its own PC register.

JIT Compiler

JIT stands for Just-In-Time compiler. It is used when a method is called: the JIT compiles the bytecode of the called method into native machine code. Once a method has been compiled to native machine code, the JVM calls the compiled code of that method directly instead of interpreting it.
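A hedged way to watch the JIT at work on HotSpot is to call a method in a tight loop until it becomes "hot"; the -XX:+PrintCompilation option mentioned in the comment is a standard HotSpot diagnostic flag, and the class and method names here are illustrative.

// JitWarmup.java -- illustrative sketch; run with:
//   java -XX:+PrintCompilation JitWarmup
// After enough invocations, HotSpot's JIT compiles hotSquare() to native code
// and the compilation log shows an entry for JitWarmup::hotSquare.
public class JitWarmup {

    static long hotSquare(long x) {
        return x * x;               // trivial method that becomes "hot"
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += hotSquare(i);    // repeated calls trigger JIT compilation
        }
        System.out.println("sum = " + sum);
    }
}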

Native Method Stack

The native method stack is used by native methods to access the runtime data areas of the virtual machine.

Native Method interface

It enables Java code to call or be called by native applications written in languages such as C or C++. Native applications are programs written in a low-level language specific to the hardware and operating system of a system.
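For illustration, a native method is declared in Java with the native keyword and implemented on the C/C++ side through the JNI; the library name and method below are hypothetical, not a real library.

// NativeBridge.java -- hedged sketch of the Java side of a JNI call.
// A C/C++ file would implement Java_NativeBridge_currentOsInfo and be built
// into a shared library called "nativebridge" (hypothetical name).
public class NativeBridge {

    static {
        // Loads libnativebridge.so / nativebridge.dll at class-initialization time.
        System.loadLibrary("nativebridge");
    }

    // Declared in Java, implemented in native code via the JNI.
    public static native String currentOsInfo();

    public static void main(String[] args) {
        System.out.println(currentOsInfo());
    }
}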

Garbage collection

Garbage collection is used by the JVM for automatic memory management. It destroys unreferenced objects on the heap so that memory can be reclaimed for new objects.
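Here is a minimal sketch of how an object becomes eligible for collection, assuming nothing about which collector the JVM uses; note that System.gc() is only a hint to the JVM, not a command.

// GcDemo.java -- sketch of an unreferenced object becoming eligible for GC.
public class GcDemo {
    public static void main(String[] args) {
        byte[] big = new byte[10_000_000];   // allocated on the heap
        System.out.println("Allocated " + big.length + " bytes");

        big = null;                          // no reference remains: the array is
                                             // now eligible for garbage collection

        System.gc();                         // only a *hint*; the JVM decides
                                             // when (or whether) to collect
        System.out.println("Unreferenced array may now be reclaimed");
    }
}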

JDK Architecture & API Details

The figure below gives an idea of how these libraries and APIs are distributed across different levels.

JDK APIs Architecture

Difference between API and Methods

An API (Application Programming Interface) is the interface that the rest of the world sees and can use. A method may or may not be part of that public interface, and an API exposes a set of methods.

In Java, APIs are provided through interfaces, which are really sets of public methods. An API defines a contract: it tells you the method signatures and return types.

For example, the List API provides different method signatures and their expected return types, so you can use whichever method suits your needs.
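As a hedged sketch, the code below programs against the List interface (the API contract) while swapping concrete implementations; the behavior promised by the method signatures stays the same.

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// ListApiDemo.java -- sketch: program to the List API, not to an implementation.
public class ListApiDemo {

    // The parameter type is the API (interface); any implementation works.
    static void fillAndPrint(List<String> names) {
        names.add("JDK");            // add(E) is part of the List contract
        names.add("JRE");
        System.out.println(names.get(0) + ", size=" + names.size());
    }

    public static void main(String[] args) {
        fillAndPrint(new ArrayList<>());    // array-backed implementation
        fillAndPrint(new LinkedList<>());   // linked-node implementation
    }
}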

Difference between JDK and SDK

The JDK (Java Development Kit) is a subset of an SDK (Software Development Kit).

  • The JDK includes tools for developing, debugging, and monitoring Java programs. It is mainly responsible for the writing and running of Java programs.
  • An SDK is composed of extra software for building web or mobile applications, such as an application server, documentation, debuggers, code samples, tutorials, the GlassFish server, MySQL, and the NetBeans IDE.

Now, Oracle strongly suggests using the term JDK to refer to the Java SE Development Kit. The Java EE SDK is now available with or without the JDK, by which they specifically mean the Java SE 7 JDK.

See Also: Java Versions History

 


Data Structure and Algorithms Complexity (Big-O)


Big-O notation is a mathematical representation used to describe the complexity of a data structure or algorithm. There are two types of complexity:

  1. Time Complexity: measured by the number of steps an algorithm needs to perform.
  2. Space Complexity: measures the memory required by an algorithm or data structure.

Data Structure and Algorithm Decision

Data structure and algorithm decisions are based on the size of the data and the operations that need to be performed on it. Some algorithms perform better on small data sets, while others perform better on big data sets for certain operations.

These days space complexity is usually not a big concern; the performance of an algorithm is mainly measured by its time complexity. We can choose an algorithm based on the performance criteria below with respect to complexity (a small code sketch follows the table).

Complexity Performance
O(1) Excellent
O(log(n)) Good
O(n) Fair
O(n log(n)) Bad
O(n^2) Horrible
O(2^n) Horrible
O(n!) Horrible
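
As a rough illustration of the first three rows of this table, the hedged sketch below compares an O(1) array access, an O(log n) binary search, and an O(n) linear scan over the same sorted array.

import java.util.Arrays;

// ComplexityDemo.java -- sketch of O(1), O(log n) and O(n) operations.
public class ComplexityDemo {
    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i * 2;                 // sorted data: 0, 2, 4, ...
        }

        // O(1): direct index access, one step regardless of n.
        int middle = data[data.length / 2];

        // O(log n): binary search halves the search space at each step.
        int idx = Arrays.binarySearch(data, 123456);

        // O(n): linear scan may inspect every element in the worst case.
        int found = -1;
        for (int i = 0; i < data.length; i++) {
            if (data[i] == 123456) { found = i; break; }
        }

        System.out.println(middle + " " + idx + " " + found);
    }
}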

This post covers the space and time Big-O complexities of common data structures and algorithms. Below you can also see the time complexities for operations such as access, search, insertion, and deletion.


Data Structure Space and Time Complexity

Data Structure          Access (Avg)   Search (Avg)   Insertion (Avg)   Deletion (Avg)   Space (Worst)
Array Θ(1) Θ(n) Θ(n) Θ(n) O(n)
Stack Θ(n) Θ(n) Θ(1) Θ(1) O(n)
Queue Θ(n) Θ(n) Θ(1) Θ(1) O(n)
Singly-Linked List Θ(n) Θ(n) Θ(1) Θ(1) O(n)
Doubly-Linked List Θ(n) Θ(n) Θ(1) Θ(1) O(n)
Skip List Θ(log(n)) Θ(log(n)) Θ(log(n)) Θ(log(n)) O(n log(n))
Hash Table N/A Θ(1) Θ(1) Θ(1) O(n)
Binary Search Tree Θ(log(n)) Θ(log(n)) Θ(log(n)) Θ(log(n)) O(n)
Cartesian Tree N/A Θ(log(n)) Θ(log(n)) Θ(log(n)) O(n)
B-Tree Θ(log(n)) Θ(log(n)) Θ(log(n)) Θ(log(n)) O(n)
Red-Black Tree Θ(log(n)) Θ(log(n)) Θ(log(n)) Θ(log(n)) O(n)
Splay Tree N/A Θ(log(n)) Θ(log(n)) Θ(log(n)) O(n)
AVL Tree Θ(log(n)) Θ(log(n)) Θ(log(n)) Θ(log(n)) O(n)
KD Tree Θ(log(n)) Θ(log(n)) Θ(log(n)) Θ(log(n)) O(n)
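
To connect this table to the JDK's collections, the hedged sketch below uses HashMap (a hash table: Θ(1) average search) and TreeMap (a red-black tree: Θ(log n) search), matching two of the rows above.

import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// TableDemo.java -- sketch mapping two table rows to JDK collections.
public class TableDemo {
    public static void main(String[] args) {
        // Hash Table row: average Θ(1) insertion and search.
        Map<String, Integer> hash = new HashMap<>();
        hash.put("answer", 42);
        System.out.println("HashMap lookup: " + hash.get("answer"));

        // Red-Black Tree row: Θ(log n) insertion and search, keys kept sorted.
        TreeMap<String, Integer> tree = new TreeMap<>();
        tree.put("b", 2);
        tree.put("a", 1);
        System.out.println("TreeMap first key: " + tree.firstKey());
    }
}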

Sorting Algorithms Space and Time Complexity

Algorithm           Best (Time)    Average (Time)    Worst (Time)    Worst (Space)
Quick Sort Ω(n log(n)) Θ(n log(n)) O(n^2) O(log(n))
Merge Sort Ω(n log(n)) Θ(n log(n)) O(n log(n)) O(n)
Tim Sort Ω(n) Θ(n log(n)) O(n log(n)) O(n)
Heap Sort Ω(n log(n)) Θ(n log(n)) O(n log(n)) O(1)
Bubble Sort Ω(n) Θ(n^2) O(n^2) O(1)
Insertion Sort Ω(n) Θ(n^2) O(n^2) O(1)
Selection Sort Ω(n^2) Θ(n^2) O(n^2) O(1)
Tree Sort Ω(n log(n)) Θ(n log(n)) O(n^2) O(n)
Shell Sort Ω(n log(n)) Θ(n(log(n))^2) O(n(log(n))^2) O(1)
Bucket Sort Ω(n+k) Θ(n+k) O(n^2) O(n)
Radix Sort Ω(nk) Θ(nk) O(nk) O(n+k)
Counting Sort Ω(n+k) Θ(n+k) O(n+k) O(k)
Cube Sort Ω(n) Θ(n log(n)) O(n log(n)) O(n)
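
As one worked example from the sorting table, here is a hedged sketch of insertion sort, which shows why its best case is Ω(n) (already-sorted input), its average/worst case is Θ(n^2)/O(n^2), and its extra space is O(1).

import java.util.Arrays;

// InsertionSortDemo.java -- sketch matching the "Insertion Sort" table row.
public class InsertionSortDemo {

    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            // Shift larger elements one slot to the right.
            // Already-sorted input: inner loop never runs -> Ω(n) best case.
            // Reversed input: inner loop runs i times -> O(n^2) worst case.
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;          // sorting is in place: O(1) extra space
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 3};
        insertionSort(data);
        System.out.println(Arrays.toString(data));   // [1, 2, 3, 5, 9]
    }
}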