Unix Timestamp Converter
Convert between Unix timestamps and human-readable date/time.
About this tool
Unix time, also called Epoch time or POSIX time, is a system for representing points in time as a single integer — the number of seconds elapsed since midnight UTC on January 1, 1970, known as the Unix Epoch. This representation is timezone-independent, requires no calendar logic to store, and is trivially comparable and sortable. It appears in database records, log files, API responses, JWT claims (exp, iat, nbf), file metadata, and virtually every system that deals with time.
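As a quick illustration, Python's standard library converts in both directions. A minimal sketch (the timestamp value here is arbitrary):

```python
from datetime import datetime, timezone

ts = 1_700_000_000  # seconds since the Unix Epoch (10 digits)

# Timestamp → human-readable, always decoding in UTC:
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt)  # 2023-11-14 22:13:20+00:00

# Human-readable → timestamp, round-tripping exactly:
back = int(dt.timestamp())
assert back == ts
```

Decoding with an explicit `tz=timezone.utc` avoids the local-timezone ambiguity that `fromtimestamp` has by default.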
JavaScript and many modern APIs use millisecond-precision timestamps (13-digit integers, e.g., Date.now()). Traditional Unix commands (date +%s), POSIX APIs, and many databases use second-precision (10-digit integers). Some high-performance systems use microseconds (16-digit) or nanoseconds (19-digit). When debugging, the number of digits is usually the quickest way to identify the precision: 10 digits is seconds, 13 is milliseconds, 16 is microseconds, and 19 is nanoseconds.
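The digit-count rule of thumb can be written as a small helper. A sketch (the function name is made up for illustration, and like any heuristic it assumes timestamps for roughly present-day dates):

```python
def guess_precision(ts: int) -> str:
    """Guess a Unix timestamp's precision from its digit count (heuristic)."""
    digits = len(str(abs(ts)))
    if digits <= 10:
        return "seconds"        # e.g., 1_700_000_000
    if digits <= 13:
        return "milliseconds"   # e.g., 1_700_000_000_000 (Date.now() scale)
    if digits <= 16:
        return "microseconds"
    return "nanoseconds"
```

For example, `guess_precision(1_700_000_000)` returns `"seconds"` and `guess_precision(1_700_000_000_000)` returns `"milliseconds"`.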
Treating a millisecond timestamp as a second-precision timestamp is one of the most common timestamp bugs: a 13-digit value read as seconds decodes to a date tens of thousands of years in the future. Conversely, treating a second-precision timestamp as milliseconds produces dates in January 1970, just days to weeks past the Epoch. Always verify precision by checking the date the timestamp decodes to.
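Both misreadings can be demonstrated with one arbitrary instant expressed at both precisions (a sketch; the approximate-year arithmetic is only there because the far-future date overflows Python's datetime range):

```python
from datetime import datetime, timezone

ms_ts = 1_700_000_000_000  # an instant in milliseconds (13 digits)
s_ts = 1_700_000_000       # the same instant in seconds (10 digits)

# Correct interpretations agree on the same moment:
assert datetime.fromtimestamp(ms_ts / 1000, tz=timezone.utc) == \
       datetime.fromtimestamp(s_ts, tz=timezone.utc)

# Bug 1: seconds misread as milliseconds lands ~20 days into 1970.
wrong = datetime.fromtimestamp(s_ts / 1000, tz=timezone.utc)
print(wrong.year)  # 1970

# Bug 2: milliseconds misread as seconds lands tens of thousands of
# years out (too far for datetime, so estimate with 31,556,952 s/year):
approx_year = 1970 + ms_ts // 31_556_952
print(approx_year > 50_000)  # True
```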
Unix timestamps are stored as 32-bit signed integers in many legacy C and embedded systems. A 32-bit signed integer can represent values up to 2,147,483,647, which corresponds to 03:14:07 UTC on January 19, 2038. After this moment, a 32-bit timestamp overflows to a negative number representing a date in 1901. This is the Year 2038 problem. All modern software should use 64-bit integers for timestamps, which can represent dates roughly 292 billion years into the future.
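The wraparound can be simulated by forcing the value into 32 bits. A sketch that uses Python's struct module to emulate C's signed 32-bit representation:

```python
import struct
from datetime import datetime, timezone

t = 2_147_483_647  # INT32_MAX: 2038-01-19 03:14:07 UTC
one_second_later = t + 1

# Pack the low 32 bits as unsigned, read them back as signed:
# the value wraps to INT32_MIN.
wrapped, = struct.unpack("<i", struct.pack("<I", one_second_later & 0xFFFFFFFF))
print(wrapped)  # -2147483648

# The overflowed timestamp decodes to December 1901:
print(datetime.fromtimestamp(wrapped, tz=timezone.utc).year)  # 1901
```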
In databases, timestamp handling varies significantly. PostgreSQL's TIMESTAMP WITH TIME ZONE (TIMESTAMPTZ) stores values in UTC and converts to the session timezone on display. MySQL's TIMESTAMP also stores in UTC but is limited to 2038; DATETIME does not store timezone information. MongoDB's BSON date type (displayed as ISODate in the shell) is a 64-bit millisecond timestamp. In Redis, Unix timestamps are commonly used as sorted-set scores, enabling efficient time-range queries. Always document what timezone and precision your stored timestamps use.