Five Key Traits of Database Access Administration

Raise your hand if this applies to you:

  • You manually provision user access to tables and views. Routine requests involve creating a new user and assigning roles each time.
  • You have redundant user accounts with the same set of privileges, but for different database objects.
  • You know there’s a password, or another form of shared secret, being shared among users.
  • Your audit trail is missing information. What queries were run? Who was the user? What data moved from client to server?
  • Your database users are not tied to the actual identities of team members, i.e. “dev-team” in the logs does not communicate as much as a real name and email address.
  • You are concerned about exposing a database endpoint to the outside world, but juggling VPNs is not the best experience in the world.

I’ve sat in on many discovery calls to understand how tech-enabled companies manage access to their databases. What I found was an operational minefield of dormant secrets, tangled grant policies, laborious provisioning procedures, reliance on VPNs, and scant incident detection. To be clear, this is no fault of the administrator; quite the opposite. These seemingly bad design choices exist because, given the tools available, they are the optimal solution.

Effective management of database access comes down to five key traits: strong identity-based authentication, secret management, access controls, revocation, and attribution (observability). When each of the five can work without undermining the others, administrators do not have to make sacrifices that become thorny once scaled up. Unfortunately, this is rarely the case with database user administration.

The Five Key Traits

Strong Authentication
Strong authentication ensures an illegitimate user cannot fool the database into granting access. For the most part, companies incorporate some degree of cryptography as an added security layer. Challenge-response schemes that employ hashing are popular for this exact reason. More secure methods involve public-private key cryptography. The current state of the art for authentication is short-lived certificates issued based on user identities.
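To make the challenge-response idea concrete, here is a minimal Python sketch of a hashed exchange. The secret, names, and scheme are illustrative simplifications, not any particular database's wire protocol; the key point is that the secret itself never travels to the server.

```python
import hashlib
import hmac
import secrets

SHARED_SECRET = b"correct horse battery staple"  # illustrative only

def issue_challenge() -> bytes:
    """Server side: generate a random, single-use challenge (nonce)."""
    return secrets.token_bytes(32)

def compute_response(secret: bytes, challenge: bytes) -> str:
    """Client side: prove knowledge of the secret without sending it."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(secret: bytes, challenge: bytes, response: str) -> bool:
    """Server side: recompute the response and compare in constant time."""
    expected = compute_response(secret, challenge)
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
response = compute_response(SHARED_SECRET, challenge)
assert verify(SHARED_SECRET, challenge, response)
assert not verify(b"wrong secret", challenge, response)
```

Because each challenge is random and single-use, a captured response cannot be replayed, which is exactly the property that makes these schemes popular.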

Secret Management
The current paradigm of elastic infrastructure, microservices, and remote work inevitably leads to passwords, keys, and other forms of shared secrets sprawled over corporate (and personal) devices. This results in the paradoxical scenario where sharing secrets is the safer option. Usage of auto-expiring X.509 certificates removes the need for shared secrets entirely.
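To see why auto-expiring credentials remove the safekeeping burden, consider this minimal Python sketch of a signed, self-expiring credential. It uses an HMAC-signed token as a stand-in for an X.509 certificate, and all names and values are made up for illustration; the point is that an expired credential is useless even if it leaks.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-signing-key"  # stands in for a CA's private key

def issue_credential(user: str, ttl_seconds: int) -> str:
    """Issue a signed credential that carries its own expiration time."""
    claims = {"user": user, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def validate(credential: str) -> bool:
    """Reject tampered or expired credentials; no revocation list needed."""
    payload, sig = credential.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False  # signature mismatch: tampered or wrong key
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return time.time() < claims["exp"]  # expired credentials simply stop working

cred = issue_credential("alice", ttl_seconds=3600)
assert validate(cred)
stale = issue_credential("alice", ttl_seconds=-1)  # already expired
assert not validate(stale)
```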

Access Controls
Databases let administrators tune precise access controls through grants, roles, privileges, etc. The level of intricacy afforded to administrators often results in two extreme implementations: (1) managing a mountain of privileges across X users and Y objects or (2) granting a manageable number of roles that overextend privileges to current and future objects. The former approach suffers from excessive complexity and operational overhead, and the latter violates the principle of least privilege.
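The middle ground between those extremes is a small set of roles scoped to explicit privileges. A hypothetical Python sketch of that mapping, with made-up role, user, and table names:

```python
# Roles collapse an N-users x M-objects grant matrix into a small,
# auditable mapping without over-granting. All names are illustrative.

ROLE_PRIVILEGES = {
    "analyst":  {("reports", "SELECT")},
    "engineer": {("reports", "SELECT"), ("events", "SELECT"), ("events", "INSERT")},
}

USER_ROLES = {
    "alice": {"analyst"},
    "bob":   {"engineer"},
}

def is_allowed(user: str, table: str, action: str) -> bool:
    """Check a request against the privileges granted via the user's roles."""
    return any(
        (table, action) in ROLE_PRIVILEGES.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

assert is_allowed("alice", "reports", "SELECT")
assert not is_allowed("alice", "events", "INSERT")  # least privilege holds
assert is_allowed("bob", "events", "INSERT")
```

Adding a user means assigning a role, not re-issuing grants object by object, while each role still enumerates exactly the privileges it confers.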

Revocation
Revocation limits the longevity of a credential, which expires after a set period. Revocation is helpful in many cases as people change jobs, passwords get leaked, and computers get left in Ubers.

Attribution
Attributing activity to a particular user makes auditability and alerting much simpler. Database audit tools are quite mature, but they are limited to capturing the “what” of a session, not the “who.” Furthermore, employees often share credentials, especially within small teams, making it challenging to accurately identify which user did what. LDAP and Active Directory alleviate this pressure in many ways but introduce limitations through weak authentication and secret management.

What Good User Management Looks Like


Instead of users like “alice”, think of identities like “Alice Doe” paired with a verified email address. Adopting an identity management platform (IdM) like Okta is a great first step toward good identity management. The general value proposition is well known: let administrators manage users and permissions in one central directory and let users SSO into authorized web apps.

Consider implementing database access via SSO provided by an identity management platform of your choice.

Strong Authentication
Nearly all IdMs, even free ones like GitHub and Google Apps, as well as Okta or Active Directory, support the OAuth/OIDC and SAML protocols. Both provide robust authentication methods that tie a user’s access to their identity and simplify the user experience with a single authentication flow.

All IdMs support multi-factor authentication (MFA); expiring codes or U2F keys further reinforce authentication security.

(Read more on OAuth, OIDC, and SAML.)


Secret Management
Most SSO flows are password-based, making passwords the only credential across all applications that needs safekeeping. Signatures and secrets used in modern identity protocols like SAML and OIDC can be thrown away after use.


Access Controls
Using an IdM as the central identity provider lets applications read a user’s account attributes, like groups, roles, and teams, to inform authorization. Applications can ingest this data quickly due to standardized formatting: attributes for SAML and scopes for OAuth/OIDC.
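Authorization logic that consumes these IdM attributes can be quite simple. A Python sketch of deriving database roles from a groups claim in an SSO token; the group names and role mapping are made up for illustration:

```python
# Hypothetical mapping from identity-provider groups (SAML attributes
# or OIDC claims) to database roles. All names are illustrative.

GROUP_TO_DB_ROLES = {
    "dev-team": ["readwrite"],
    "sre-team": ["readwrite", "admin"],
    "support":  ["readonly"],
}

def db_roles_for(claims: dict) -> set:
    """Derive database roles from the groups claim in an SSO token."""
    roles = set()
    for group in claims.get("groups", []):
        roles.update(GROUP_TO_DB_ROLES.get(group, []))
    return roles

claims = {"sub": "alice@example.com", "groups": ["dev-team", "support"]}
assert db_roles_for(claims) == {"readwrite", "readonly"}
```

Because the directory is the single source of truth, moving a user between groups in the IdM changes their database access everywhere at once.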


Revocation
SAML and OAuth/OIDC have built-in expiration mechanics, making revocation automatic. OAuth/OIDC even adds a layer through revocable “refresh tokens” used to obtain new “access tokens.”


Attribution
Mapping user sessions to identities in something like Okta makes attribution very simple. As long as application auditing is robust, audit trails provide a complete picture of who did what.

Harnessing Teleport for DB Access Administration

While most of the industry’s focus has been on improving user administration and access for web applications, we started from the bottom of the stack. We built Teleport as an open source, identity-aware access proxy with an integrated certificate authority. Originally for accessing instances via SSH, we expanded to include Kubernetes clusters, internal web applications, and, as of our most recent update, Teleport 6.0, databases as well. Our goal is to make it easy for users to access infrastructure resources while maintaining security and compliance measures.

Strong Authentication
Teleport uses X.509 certificates and a certificate authority (CA) to authenticate users. CAs are their own hardened service and use rotating private keys to digitally sign certificates and validate them for authentication. Key generation algorithms like RSA and Ed25519 provide strong cryptographic guarantees.


Secret Management
Administrators only have to manage the private keys of the certificate authorities trusted to digitally sign certificates, and they can rotate these private keys continually.

Access Controls
Protocols like SAML and OIDC ferry a user’s attributes from the identity provider to the database via X.509 certificates. Access policies ingest a user’s role, team, group, etc. to grant or deny access to databases.

Revocation
Expiry can be baked into each certificate, making them disposable.

Attribution
Identity information transported via certificate is logged on a per-session basis, enriching the audit trail with user attributes.

Teleport consists of just two dependency-free binaries: the teleport daemon runs on servers, and the tsh CLI runs on clients. The server daemon can perform several functions:

  • The Proxy accepts connections from the clients.
  • The Certificate Authority (CA) issues short-lived certificates for clients.
  • Sidecars maintain persistent reverse tunnels to a proxy which allows clients to connect to databases.

The diagram below shows how this all comes together. The teleport daemons are shown as grey rectangles:

  • When a user types tsh login, they trigger a login sequence. The Teleport proxy accepts the login request and forwards the client to the company’s identity platform. This could be any identity provider: Google Apps, GitHub, Okta, Active Directory, etc.
  • After the user completes the login process, the Teleport certificate authority (CA) issues an X.509 certificate, filled with the user’s identity metadata (roles, etc) which is returned to the client.
    The tsh client is now aware of all databases available to this user. tsh also configures the user’s command line environment, so psql knows to talk to the proxy.
  • Meanwhile, Teleport’s database service process (shown as “sidecar” in the diagram) is running on the same network as a database. The sidecar establishes an outgoing persistent reverse SSH tunnel to the proxy. These tunnels are how database instances are registered as “online.”
    When a user decides to connect to a specific DB instance, the connection goes from psql to a proxy, then (via an appropriate reverse SSH tunnel) to the corresponding sidecar and from there, via mutual TLS, to the target database instance.
  • Finally, activity is tracked through the Teleport proxy and stored as JSON files, ready for export into any SIEM provider. An example is shown below:

{
  "cluster_name": "root",
  "code": "TDN0ZI",
  "db_name": "test",
  "db_protocol": "postgres",
  "db_query": "select now();",
  "db_service": "local",
  "db_uri": "localhost:5432",
  "ei": 1,
  "event": "db.session.query",
  "sid": "cc61436b-60b6-4b40-85b1-61976b782441",
  "time": "2021-02-17T19:23:49.281Z",
  "uid": "29f9f7d5-3122-48a7-8f7a-c2b4775156e9",
  "user": "robin"
}
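Events exported this way are straightforward to consume programmatically. The following Python sketch filters query events by user; the field names follow the sample event above, while the watchlist rule itself is made up for illustration:

```python
import json

# A simplified audit event, using the same field names as the sample
# above. The alerting rule is a hypothetical example, not a Teleport API.
RAW_EVENT = """{
  "event": "db.session.query",
  "db_name": "test",
  "db_query": "select now();",
  "user": "robin",
  "time": "2021-02-17T19:23:49.281Z"
}"""

def should_alert(event: dict, watched_users: set) -> bool:
    """Flag database query events issued by users on a watchlist."""
    return (
        event.get("event") == "db.session.query"
        and event.get("user") in watched_users
    )

event = json.loads(RAW_EVENT)
assert should_alert(event, {"robin"})
assert not should_alert(event, {"alice"})
```

Because every event carries the actual user identity rather than a shared account, rules like this can target people, not credentials.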

Get Started with Database Access

Teleport is available as an open-source binary, a fully-hosted cloud offering, or as an enterprise package to run on your own clouds or private data centers. You can sign up for a commercial offering or download an open source version on the getting started page.

If you are curious to learn more about how Teleport provides unified access, the how it works guide is a good place to start and you can dive deeper into the documentation. Finally, if you want us to support other databases, tell us in our community Slack channel!

Virag Mody

Virag joined Teleport in January of 2020, after co-founding a software code auditing company for Ethereum applications. Virag continues to learn about trending technologies and produces high quality content.