Wednesday, August 7, 2019

A Maven Parent POM for a Spring Boot based application

A Spring Boot based Java application is a good choice for new Java projects. The Spring Boot ecosystem provides numerous libraries and tools.

To better manage the versions of the large number of libraries involved, a good approach is to upgrade the libraries along with Spring Boot upgrades. This saves significant maintenance and integration effort by taking advantage of the work the Spring Boot team has already done.

The sample parent POM below provides the following customizations:

  1. removed the Logback and Apache Commons Logging related dependencies
  2. added unit test dependencies
  3. enabled resource filtering
  4. configured Google style check




<?xml version="1.0" encoding="UTF-8"?>
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://maven.apache.org/POM/4.0.0"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <!-- This is parent pom for Spring Boot based application. The pom uses spring-boot-dependencies to control dependencies 
    in order to reduce dependency related maintenance effort. The intention is that all applications will go and update along 
    with Spring Boot latest versions and we only define the versions that are required to be overridden here. -->
  <parent>
    <groupId>com.urcorp</groupId>
    <artifactId>urcorp</artifactId>
    <version>1.0</version>
  </parent>

  <groupId>com.urcorp.urdepartment</groupId>
  <artifactId>app-springboot-parent</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>pom</packaging>
  <name>app-springboot-parent</name>
  <description>Parent POM for Spring Boot based Application</description>

  <properties>
    <maven.build.timestamp.format>yyyy-MM-dd'T'HH:mm:ss'Z'</maven.build.timestamp.format>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>

    <!-- latest versions as of 08/06/2019 -->

    <!-- Spring Boot dependencies -->
    <!-- spring.version from spring-boot.version below: 5.1.7.RELEASE -->
    <spring-boot.version>2.1.5.RELEASE</spring-boot.version>

    <!-- Test scope -->
    <awaitility.version>3.1.6</awaitility.version>

    <!-- Maven plugins -->
    <google.format.version>1.5</google.format.version>
    <junit-vintage-engine.version>5.5.1</junit-vintage-engine.version>
    <maven-compiler-plugin.version>3.8.1</maven-compiler-plugin.version>
    <maven-surefire-plugin.version>2.22.2</maven-surefire-plugin.version>
    <spotless-maven-plugin.version>1.23.0</spotless-maven-plugin.version>
  </properties>

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-dependencies</artifactId>
        <version>${spring-boot.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>

      <!-- Remove dependencies on logback -->
      <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
        <version>${spring-boot.version}</version>
        <exclusions>
          <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
          </exclusion>
          <exclusion>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-access</artifactId>
          </exclusion>
          <exclusion>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
          </exclusion>
          <exclusion>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-core</artifactId>
          </exclusion>
          <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>log4j-over-slf4j</artifactId>
          </exclusion>
        </exclusions>
      </dependency>

      <!-- Remove dependencies on commons-logging -->
      <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-core</artifactId>
        <exclusions>
          <exclusion>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
          </exclusion>
        </exclusions>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <dependencies>
    <!-- ============== Compile & Runtime ============== -->
    <!-- WARNING: be VERY selective. Dependencies you add here are pulled into every sub-module. -->
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter</artifactId>
    </dependency>

    <!-- ==================== Tests ==================== -->
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
    </dependency>

    <!-- JUnit -->
    <dependency>
      <groupId>org.junit.platform</groupId>
      <artifactId>junit-platform-launcher</artifactId>
      <scope>test</scope>
    </dependency>

    <!-- JUnit 5 -->
    <dependency>
      <groupId>org.junit.jupiter</groupId>
      <artifactId>junit-jupiter-engine</artifactId>
      <scope>test</scope>
    </dependency>

    <dependency>
      <groupId>org.junit.jupiter</groupId>
      <artifactId>junit-jupiter-params</artifactId>
      <scope>test</scope>
    </dependency>

    <!-- JUnit 4 -->
    <dependency>
      <groupId>org.junit.vintage</groupId>
      <artifactId>junit-vintage-engine</artifactId>
      <scope>test</scope>
    </dependency>
    
    <!-- mock -->
    <dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-inline</artifactId>
      <scope>test</scope>
    </dependency>

    <!-- asynchronous systems testing -->
    <dependency>
      <groupId>org.awaitility</groupId>
      <artifactId>awaitility</artifactId>
      <version>${awaitility.version}</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <build>
    <resources>
      <resource>
        <directory>src/main/resources</directory>
        <filtering>true</filtering>
      </resource>
      <resource>
        <directory>src/test/resources</directory>
        <filtering>true</filtering>
      </resource>
    </resources>
    <plugins>
      <plugin>
        <groupId>com.diffplug.spotless</groupId>
        <artifactId>spotless-maven-plugin</artifactId>
        <version>${spotless-maven-plugin.version}</version>
        <configuration>
          <java>
            <includes>
              <!-- Include all java files in "src" folder -->
              <include>src/**/*.java</include>
            </includes>
            <googleJavaFormat>
              <!-- Optional, available versions: https://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22com.google.googlejavaformat%22%20AND%20a%3A%22google-java-format%22 -->
              <version>${google.format.version}</version>
              <!-- Optional, available versions: GOOGLE, AOSP https://github.com/google/google-java-format/blob/master/core/src/main/java/com/google/googlejavaformat/java/JavaFormatterOptions.java -->
              <style>GOOGLE</style>
            </googleJavaFormat>
            <removeUnusedImports />
          </java>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>${maven-compiler-plugin.version}</version>
        <configuration>
          <source>${maven.compiler.source}</source>
          <target>${maven.compiler.target}</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>${maven-surefire-plugin.version}</version>
        <configuration>
          <excludes>
          </excludes>
        </configuration>
      </plugin>
    </plugins>
  </build>

</project>
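An application module can then inherit all of this by declaring the POM above as its parent. The following is a minimal sketch; the module artifactId and the starter chosen are invented for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <parent>
    <groupId>com.urcorp.urdepartment</groupId>
    <artifactId>app-springboot-parent</artifactId>
    <version>0.0.1-SNAPSHOT</version>
  </parent>

  <!-- hypothetical application module -->
  <artifactId>my-sample-app</artifactId>

  <dependencies>
    <!-- no version element needed: versions are managed by
         spring-boot-dependencies imported in the parent -->
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
  </dependencies>
</project>
```

The child automatically picks up the JUnit 5 test dependencies, resource filtering, compiler settings, and Spotless/Google-format check from the parent.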



Monday, January 29, 2018

Cassandra vs Oracle

Source:
https://www.datastax.com/wp-content/uploads/2013/11/WP-DataStax-Oracle.pdf

Oracle is a solid RDBMS that performs well for the use cases for which it was designed (e.g. ERP and accounting applications). It is not architected to tackle the new wave of big data online applications developed today. The scale-up, master-slave, non-distributed architecture of Oracle falls short of what modern online applications need.

Scalability and Performance Limitations 

Oracle’s scale-up, master-slave design limits both its scalability and performance for servicing the online elasticity and performance SLA needs of many online applications. 
Oracle's inability to add capacity online in an elastic, scale-out (rather than scale-up) manner to service increasing user workloads, keep performance high, and easily consume fast incoming data from countless geographical locations is widely recognized.

Benefits of Cassandra

  • Massively scalable architecture – a masterless design where all nodes are the same.
  • Linear scale performance – online node additions produce predictable increases in performance.
  • Continuous availability – redundancy of both data and function mean no single point of failure.
  • Transparent fault detection and recovery – easy failed node recovery.
  • Flexible, dynamic schema data modeling – easily supports structured, semi-structured, and unstructured data.
  • Guaranteed data safety – commit log design ensures no data loss.
  • Active everywhere design – all nodes may be written to and read from.
  • Tunable data consistency – support for strong or eventual data consistency.
  • Multi-data center replication – cross data center and multi-cloud availability zone support for writes/reads built in.
  • Data compression – data compressed up to 80% without performance overhead.
  • CQL (Cassandra Query Language) – an SQL-like language that makes moving from an RDBMS very easy.
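As a sketch of how familiar CQL feels coming from SQL (the keyspace, table, and column names here are invented, and the uuid()/toTimestamp() functions assume a reasonably recent Cassandra version):

```sql
-- CQL: create a keyspace replicated 3 ways, then a table keyed for per-user lookups
CREATE KEYSPACE shop
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

-- the partition key (user_id) decides data placement across nodes;
-- order_time clusters rows within a partition
CREATE TABLE shop.orders (
  user_id    uuid,
  order_time timestamp,
  total      decimal,
  PRIMARY KEY (user_id, order_time)
);

INSERT INTO shop.orders (user_id, order_time, total)
VALUES (uuid(), toTimestamp(now()), 42.50);

-- queries look like SQL, but must be driven by the partition key
SELECT * FROM shop.orders
WHERE user_id = 62c36092-82a1-3a00-93d1-46196ee77204;
```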

DataStax Cassandra

  • Built-in analytics functionality for Cassandra data via integration with a number of Hadoop components (e.g. MapReduce, Hive, Pig, Mahout, etc.)
  • Enterprise search capability on Cassandra data via Solr integration.
  • Enterprise security including external/internal authentication and object permission management, transparent data encryption, client-to-node and node-to-node encryption, and data auditing.
  • Visual cluster management for all administration tasks including backup/restore operations, performance monitoring, alerting, and more.

Cassandra Use Cases

  • Time-series data management (e.g. financial, sensor data, web click stream, etc.) 
  • Online web retail
  • Web buyer behavior and personalization management
  • Recommendation engines
  • Social media input and analysis
  • Online gaming
  • Fraud detection and analysis
  • Risk analysis and management
  • Supply chain analytics
  • Web product searches
  • Write-intensive transactional systems

Data Modeling Differences

In traditional databases such as Oracle, data is modeled in a standard “third normal form” design without the need to know what questions will be asked of the data. 
By contrast, in NoSQL, the questions asked of the data are what drive the data model design and the data is highly de-normalized.

Data Processing Concerns of Modern Applications

Legacy Application                        | Modern Application
------------------------------------------|------------------------------------------
Slow/medium velocity data                 | High velocity data
Data coming in from one/few locations     | Data coming in from many locations
Rigid, static structured data             | Flexible, fluid, multi-type data
Low/medium data volumes; purge often      | High data volumes; retain forever
Deploy app in central location/one server | Deploy app everywhere/many servers
Write data in one location                | Write data everywhere/anywhere
Primary concern: scale reads              | Scale writes and reads
Scale up for more users/data              | Scale out for more users/data


DataStax Cassandra vs. Oracle at Functional Level

High Availability
  • Cassandra: Continuous availability with built-in redundancy and hardware rack awareness in both single and multiple data centers
  • Oracle: General replication; Oracle Data Guard (for failover) and Oracle RAC (single point of failure with storage), both of which are expensive add-ons. GoldenGate is also offered for certain use cases

Scalability Model
  • Cassandra: Linear performance gains via node additions
  • Oracle: Scale up by adding CPUs/RAM, or via Oracle RAC or Exadata

Replication Model
  • Cassandra: Peer-to-peer; number of copies configurable across the cluster and each data center
  • Oracle: Master-slave replication

Multi-data center/geography/cloud capabilities
  • Cassandra: Multi-directional, one-to-many data center support built in, with true read/write-anywhere capability
  • Oracle: Nothing specific for multi-data center

Data partitioning/sharding model
  • Cassandra: Automatic; done via primary key; random or ordered
  • Oracle: Table partitioning option in Enterprise Edition; manual server sharding

Data volume support
  • Cassandra: TB-PB capable
  • Oracle: TB capable; PB with Exadata

Analytics support
  • Cassandra: Analytics on Cassandra data via Hadoop integration (MapReduce, Hive, Pig, Mahout)
  • Oracle: Analytic functions in Oracle RDBMS via SQL MapReduce; Hadoop support provided in the NoSQL appliance

Enterprise search support
  • Cassandra: Built into DataStax Enterprise via Solr integration
  • Oracle: Handled via Oracle Search (cost add-on)

Mixed workload support
  • Cassandra: All handled in one cluster with built-in workload isolation; no workload competes for resources with another
  • Oracle: Handled via Oracle Exadata

Data model
  • Cassandra: Google Bigtable-like; a wide column store
  • Oracle: Relational/tabular

Flexibility of data model
  • Cassandra: Flexible; designed for structured, semi-structured, and unstructured data
  • Oracle: Rigid; primarily structured data

Data consistency model
  • Cassandra: Tunable consistency (CAP theorem); consistency configurable per operation (e.g. per insert, delete, etc.) across the cluster
  • Oracle: Traditional ACID

Transaction support
  • Cassandra: Full Atomic, Isolated, and Durable (AID) transactions, including batch transactions and "lightweight" transactions with Cassandra 2.0 and higher
  • Oracle: Traditional ACID

Security
  • Cassandra: Support for all key security needs: login IDs/passwords, external security support, object permission management, transparent data encryption, client-to-node and node-to-node encryption, and data auditing
  • Oracle: Full security support

Storage model
  • Cassandra: Targeted directories with separation (e.g. put some column families on SSDs, some on spinning disk)
  • Oracle: Tablespaces

Data compression
  • Cassandra: Built in
  • Oracle: Various methods

Memory usage model
  • Cassandra: Distributed object/row caches across all nodes in a cluster
  • Oracle: Standard data/metadata caches with query cache

Logical database container
  • Cassandra: Keyspace
  • Oracle: Database

Primary data object
  • Cassandra: Column family/table
  • Oracle: Table

Data variety support
  • Cassandra: Structured, semi-structured, unstructured
  • Oracle: Primarily structured

Indexes
  • Cassandra: Primary, secondary; extensible via Solr indexes
  • Oracle: B-tree, bitmap, clustered, others

Core language
  • Cassandra: CQL (Cassandra Query Language; resembles SQL)
  • Oracle: SQL

Primary query utilities
  • Cassandra: CQL shell
  • Oracle: SQL*Plus

Visual query tools
  • Cassandra: DataStax DevCenter and 3rd-party support (Aqua Data Studio)
  • Oracle: SQL Developer from Oracle, etc.

Development language support
  • Cassandra: Many (Java, C#, Python, etc.)
  • Oracle: Many

Geospatial support
  • Cassandra: Done via Solr integration
  • Oracle: Oracle Geospatial option (cost add-on)

Logging (e.g. web, application) data support
  • Cassandra: Handled via log4j
  • Oracle: Nothing built in

Backup/recovery
  • Cassandra: Online, point-in-time restore
  • Oracle: Online, point-in-time restore

Enterprise management/monitoring
  • Cassandra: DataStax OpsCenter
  • Oracle: Oracle Enterprise Manager

Thursday, November 16, 2017

Postgres JSONB storage capabilities

JSONB Features

  • JSONB data input is a little slower, but processing is then significantly faster because the data does not need to be re-parsed
  • JSONB can be restricted by data constraints and validation functions
  • JSONB is an efficient representation with indexing capability
  • JSONB is efficient for the storage and retrieval of JSON documents, but modifying an individual field requires extracting and rewriting the entire JSON document
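A minimal sketch of these capabilities (the table and column names are invented for illustration): a JSONB column with a CHECK constraint validating the stored JSON, and a GIN index to make containment queries fast.

```sql
-- hypothetical events table storing a JSONB payload
CREATE TABLE events (
  id      bigserial PRIMARY KEY,
  payload jsonb NOT NULL,
  -- constraint/validation on the JSON: every event must carry a "type" key
  CONSTRAINT payload_has_type CHECK (payload ? 'type')
);

-- GIN index to accelerate containment (@>) and key-existence (?) queries
CREATE INDEX events_payload_idx ON events USING gin (payload);

INSERT INTO events (payload)
VALUES ('{"type": "click", "user": "alice", "x": 10, "y": 20}');

-- containment query: find all click events (can use the GIN index)
SELECT id, payload->>'user' AS who
FROM events
WHERE payload @> '{"type": "click"}';
```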

Rapid Prototyping

  • The data stored is schema-less, so as the business requirements rapidly change there is no effort needed to continuously write migrations
  • No effort is required to think through a data model and ensure proper normalization
  • No need to write SQL
  • The data is of secondary importance, and rare data loss or corruption is acceptable, so the strong guarantees provided by a standard RDBMS are not necessary

ACID

Atomicity

Each transaction should be "all or nothing".

Consistency

Any transaction will bring the database from one valid state to another: any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof.

Isolation

The concurrent execution of transactions results in a system state that would be obtained if the transactions were executed sequentially, i.e., one after the other.

Durability

Once a transaction has been committed, it will remain so, even in the event of power loss.

CAP Theorem

In the presence of a network partition, one has to choose between consistency and availability.

Consistency

Every read receives the most recent write or an error

Availability

Every request receives a (non-error) response - without guarantee that it contains the most recent write

Partition Tolerance

The system continues to operate despite an arbitrary number of messages being dropped (or delayed) by the network between nodes

It is really just A vs C

Availability is achieved by replicating the data across different machines
Consistency is achieved by updating several nodes before allowing further reads
Total partitioning, meaning failure of part of the system, is rare. However, we can view a delay (latency) in propagating updates between nodes as a temporary partition, which forces a temporary choice between A and C:
1. On systems that allow reads before updating all the nodes, we will get high availability
2. On systems that lock all the nodes before allowing reads, we will get consistency

Levels of Transaction Isolation

Read Committed
Repeatable Read
Serializable
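In PostgreSQL, for example, the isolation level can be chosen per transaction. A small sketch (the accounts table is invented for illustration):

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- both reads below see the same snapshot, even if another session
-- commits changes to accounts between them
SELECT sum(balance) FROM accounts;
SELECT sum(balance) FROM accounts;

COMMIT;
```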

Document Database

are designed to store semi-structured data, for which there is no clear separation between the data's schema and the data itself

Column-oriented DBMS

is a database management system (DBMS) that stores data tables by column rather than by row. A column-oriented database serializes all of the values of a column together, then the values of the next column, and so on.

Third Normal Form (3NF)

Each attribute contains only atomic values (1NF).
No data is redundantly represented based on any non-unique subset: for every candidate key, no non-key attribute depends on a proper subset of that key (2NF).
No non-key attribute depends on anything other than the key, i.e., there are no transitive dependencies (3NF).
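A small sketch of a 3NF violation and its fix (the table and column names are invented): storing a customer's city on each order creates a transitive dependency, because the city depends on the customer, not on the order key.

```sql
-- NOT in 3NF: customer_city depends on customer_id, not on order_id
CREATE TABLE orders_denormalized (
  order_id      int PRIMARY KEY,
  customer_id   int,
  customer_city text,   -- transitive: order_id -> customer_id -> customer_city
  total         numeric
);

-- 3NF: move the city into a customers table keyed by customer_id
CREATE TABLE customers (
  customer_id int PRIMARY KEY,
  city        text
);

CREATE TABLE orders (
  order_id    int PRIMARY KEY,
  customer_id int REFERENCES customers (customer_id),
  total       numeric
);
```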


Tuesday, November 14, 2017

VirtualBox: key concepts and features

Key Concepts

  • Host operating system
  • Guest operating system
  • Guest Additions
  • Virtual Machine (VM): the special environment that VirtualBox creates for a guest OS while it is running

Virtual Networking

VirtualBox provides up to 8 virtual PCI Ethernet cards for each virtual machine.

Networking Modes

  • Network Address Translation (NAT)
    The default mode, used when the VM hosts no services. Forwarded services are reachable from the host at 127.0.0.1:port.
  • Bridged Networking
    Provides direct external access to the VMs; VMs use the host's physical network adapter directly.
  • Internal Networking
    Used to create a software network among a set of virtual machines, without the need for the host's physical network interface. Every internal network is identified simply by its name; once there is more than one active virtual network card with the same internal network ID, the VirtualBox support driver will automatically "wire" the cards and act as a network switch.
  • Host-only Networking
    A virtual network interface is created on the host, providing connectivity among the virtual machines and the host. For example, a service hosted in one VM could use host-only networking to talk to a database hosted in another VM, and bridged networking to expose its services externally.

Remote Display (VRDP) Support

VirtualBox Remote Display Protocol (VRDP) is a backwards-compatible extension to Microsoft's Remote Desktop Protocol (RDP).

VBoxManage

Command-line interface to VirtualBox.

Importing and exporting Virtual Machines

VMs could be imported and exported using the Open Virtualization Format (OVF).

Wednesday, November 1, 2017

Oracle Locks: the V$LOCK and V$SESSION views and DBA_OBJECTS

The original article is here. I am saving a copy in case it becomes unavailable for some reason. It is one of the best articles on this topic out there.



What's blocking my lock?


By Natalka Roshak
If you've ever gotten a phone call from an annoyed user whose transaction just won't go through, or from a developer who can't understand why her application sessions are blocking each other, you know how useful it can be to identify not just whose lock is doing the blocking, but what object is locked. Even better, you can identify the exact row that a session is waiting to lock.

Create a blocking lock

To begin, create a situation where one user is actively blocking another. Open two sessions. Issue the following commands in Session 1 to build the test table:
SQL> create table tstlock (foo varchar2(1), bar varchar2(1));

Table created.

SQL> insert into tstlock values (1,'a'); 

1 row created.

SQL> insert into tstlock values (2, 'b');

1 row created.

SQL> select * from tstlock ;

FOO BAR
--- ---
1   a
2   b

2 rows selected.

SQL> commit ;

Commit complete.
Now grab a lock on the whole table, still in Session 1:
SQL> select * from tstlock for update ;
And in Session 2, try to update a row:
SQL> update tstlock set bar=
  2  'a' where bar='a' ;
This statement will hang, blocked by the lock that Session 1 is holding on the entire table.

Identify the blocking session

Oracle provides a view, DBA_BLOCKERS, which lists the SIDs of all blocking sessions. But this view is often, in my experience, a good bit slower than simply querying V$LOCK, and it doesn't offer any information beyond the SIDs of any sessions that are blocking other sessions. The V$LOCK view is faster to query, makes it easy to identify the blocking session, and has a lot more information.
SQL> select * from v$lock ;

ADDR     KADDR           SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
AF9E2C4C AF9E2C60        479 TX     131078      16739          0          6        685          0
ADDF7EC8 ADDF7EE0        422 TM      88519          0          3          0        697          0
ADDF7F74 ADDF7F8C        479 TM      88519          0          3          0        685          0
ADEBEA20 ADEBEB3C        422 TX     131078      16739          6          0        697          1
....     ....            ... ...      ....       ....       ....       ....        ....      ....
Note the BLOCK column. If a session holds a lock that's blocking another session, BLOCK=1. Further, you can tell which session is being blocked by comparing the values in ID1 and ID2. The blocked session will have the same values in ID1 and ID2 as the blocking session, and, since it is requesting a lock it's unable to get, it will have REQUEST > 0.
In the query above, we can see that SID 422 is blocking SID 479. SID 422 corresponds to Session 1 in our example, and SID 479 is our blocked Session 2.
To avoid having to stare at the table and cross-compare ID1's and ID2's, put this in a query:
SQL> select l1.sid, ' IS BLOCKING ', l2.sid
  2  from v$lock l1, v$lock l2
  3  where l1.block =1 and l2.request > 0
  4  and l1.id1=l2.id1
  5  and l1.id2=l2.id2
SQL> /

       SID 'ISBLOCKING'         SID
---------- ------------- ----------
       422  IS BLOCKING         479

1 row selected.
Even better, if we throw a little v$session into the mix, the results are highly readable:
SQL> select s1.username || '@' || s1.machine
  2  || ' ( SID=' || s1.sid || ' )  is blocking '
  3  || s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
  4  from v$lock l1, v$session s1, v$lock l2, v$session s2
  5  where s1.sid=l1.sid and s2.sid=l2.sid
  6  and l1.BLOCK=1 and l2.request > 0
  7  and l1.id1 = l2.id1
  8  and l1.id2 = l2.id2 ;


BLOCKING_STATUS
----------------------------------------------------------------------------------------------------
BULKLOAD@yttrium ( SID=422 )  is blocking BULKLOAD@yttrium ( SID=479 )

1 row selected.
There's still more information in the v$lock table, but in order to read that information, we need to understand a bit more about lock types and the cryptically-named ID1 and ID2 columns.

Lock type and the ID1 / ID2 columns

In this case, we already know that the blocking lock is an exclusive DML lock, since we're the ones who issued the locking statement. But most of the time, you won't be so lucky. Fortunately, you can read this information from the v$lock table with little effort.
The first place to look is the TYPE column. There are dozens of lock types, but the vast majority are system types. System locks are normally only held for a very brief amount of time, and it's not generally helpful to try to tune your library cache, undo logs, etc. by looking in v$lock! (See the V$LOCK chapter in the Oracle Database Reference for a list of system lock types.)
There are only three types of user locks, TX, TM and UL. UL is a user-defined lock -- a lock defined with the DBMS_LOCK package. The TX lock is a row transaction lock; it's acquired once for every transaction that changes data, no matter how many objects you change in that transaction. The ID1 and ID2 columns point to the rollback segment and transaction table entries for that transaction.
The TM lock is a DML lock. It's acquired once for each object that's being changed. The ID1 column identifies the object being modified.

Lock Modes

You can see more information on TM and TX locks just by looking at the lock modes. The LMODE and REQUEST columns both use the same numbering for lock modes, in order of increasing exclusivity: from 0 for no lock, to 6 for exclusive lock. A session must obtain an exclusive TX lock in order to change data; LMODE will be 6. If it can't obtain an exclusive lock because some of the rows it wants to change are locked by another session, then it will request a TX in exclusive mode; LMODE will be 0 since it does not have the lock, and REQUEST will be 6. You can see this interaction in the rows we selected earlier from v$lock:
ADDR     KADDR           SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
AF9E2C4C AF9E2C60        479 TX     131078      16739          0          6        685          0
ADEBEA20 ADEBEB3C        422 TX     131078      16739          6          0        697          1
Note that ID1 and ID2 in Session 2, which is requesting the TX lock (LMODE=0, REQUEST=6), point back to the rollback and transaction entries for Session 1. That's what lets us determine the blocking session for Session 2.
You may also see TX locks in mode 4, Shared mode. If a block containing rows to be changed doesn't have any interested transaction list (ITL) entries left, then the session acquires a TX lock in mode 4 while waiting for an ITL entry. If you see contention for TX-4 locks on an object, you probably need to increase INITRANS for the object.
TM locks are generally requested and acquired in mode 3 (Row Exclusive) and mode 6 (Exclusive). DDL requires a TM Exclusive lock. (Note that CREATE TABLE doesn't require a TM lock -- it doesn't need to lock any objects, because the object in question doesn't exist yet!) DML requires a Row Exclusive lock. So, in the rows we selected earlier from v$lock, you can see from the TM locking levels that these are DML locks:
ADDR     KADDR           SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
ADDF7EC8 ADDF7EE0        422 TM      88519          0          3          0        697          0
ADDF7F74 ADDF7F8C        479 TM      88519          0          3          0        685          0

Identifying the locked object

Now that we know that each TM row points to a locked object, we can use ID1 to identify the object.
SQL> select object_name from dba_objects where object_id=88519 ;

OBJECT_NAME
--------------
TSTLOCK
Sometimes just knowing the object is enough information; but we can dig even deeper. We can identify not just the object, but the block and even the row in the block that Session 2 is waiting on.

Identifying the locked row

We can get this information from v$session by looking at the v$session entry for the blocked session:
SQL> select row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
  2* from v$session where sid=479 ;

ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW#
------------- -------------- --------------- -------------
        88519             16          171309             0
This gives us the object ID, the relative file number, the block in the datafile, and the row in the block that the session is waiting on. If that list of data sounds familiar, it's because those are the four components of an extended ROWID. We can build the row's actual extended ROWID from these components using the DBMS_ROWID package. The ROWID_CREATE function takes these arguments and returns the ROWID:
SQL> select do.object_name,
  2  row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#,
  3  dbms_rowid.rowid_create ( 1, ROW_WAIT_OBJ#, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW# )
  4  from v$session s, dba_objects do
  5  where sid=479
  6  and s.ROW_WAIT_OBJ# = do.OBJECT_ID ;

OBJECT_NAME     ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW# DBMS_ROWID.ROWID_C
--------------- ------------- -------------- --------------- ------------- ------------------
TSTLOCK                 88519             16          171309             0 AAAVnHAAQAAAp0tAAA
And, of course, this lets us inspect the row directly.
SQL> select * from tstlock where rowid='AAAVnHAAQAAAp0tAAA' ;

FOO BAR
--- ---
1   a

Conclusion

We've seen how to identify a blocking session, and how to inspect the very row that the waiting session is waiting for. And, I hope, learned a bit about v$lock in the process.