How to set up dynamic secrets for Postgres using Vault and Spring Boot on Kubernetes

Martin Hodges
19 min read · May 5, 2024

In this article I look at how to add dynamic Postgres credentials to a Spring Boot application using the Hashicorp Vault secrets manager.

Using Vault for secrets management

Recently, I created a skeleton Spring Boot application on a Kind Kubernetes cluster. It is on this code base that I will show how to use Vault dynamic secrets as Postgres credentials. You can find the code for that application and all the configuration files in this article in this GitHub repository.

Importance of credential management

Your database is arguably the most important system in your infrastructure. It holds all your data and that of your users and customers. It is the system that unauthorised people want to gain access to.

It should be clear that you need to carefully and securely control access to your database.

One method to do this is to frequently rotate the credentials (username and passwords) that your applications use to access it. This means that if any credentials leak, they can only be used for a short time before they are invalidated.

This article looks at how credentials can be rotated automatically with Kubernetes, Vault, Postgres and a Spring Boot application.

How it works

Vault has a database secrets engine that allows it to manage the users that have access to a database.

In Postgres, users are referred to as roles but I will refer to them in this article as users to avoid confusion with other roles in the solution.

When your application wants to connect to the database, it asks Vault for a username and password. Vault checks its secrets cache and, if it does not find credentials for the database, it creates a user in the database, places the username and password in its cache, and returns them to the application.

Next time the application asks for the credentials, Vault finds them in the cache. So long as they have not exceeded their maximum Time To Live (TTL), they are returned. If they have expired, new credentials are created in the database and the old ones deleted. The cached credentials are replaced with the new ones.

This seems ideal. The credentials are automatically rotated without the application knowing.
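To make this behaviour concrete, here is a highly simplified model of the decision Vault makes, written in Java. This is purely illustrative: the class, method and value names are invented for this sketch and bear no relation to Vault's actual implementation.

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative model of the renew-or-recreate decision described above.
// This is NOT Vault's implementation; all names and values are invented
// for explanation only.
public class LeaseModel {
    record Credentials(String username, String password, Instant expiresAt) {}

    private Credentials cached;
    private int counter = 0;

    // Returns cached credentials while they are still valid; otherwise
    // simulates creating a fresh database user and caching it.
    public Credentials getCredentials(Instant now, Duration ttl) {
        if (cached == null || !now.isBefore(cached.expiresAt())) {
            counter++;
            cached = new Credentials("v-user-" + counter,
                    "generated-secret-" + counter, now.plus(ttl));
        }
        return cached;
    }

    public static void main(String[] args) {
        LeaseModel vault = new LeaseModel();
        Instant t0 = Instant.parse("2024-05-05T00:00:00Z");
        Duration ttl = Duration.ofHours(1);

        Credentials first = vault.getCredentials(t0, ttl);
        // Within the TTL the same credentials come back from the cache.
        Credentials again = vault.getCredentials(t0.plus(Duration.ofMinutes(30)), ttl);
        System.out.println(first.username().equals(again.username())); // true

        // After the TTL expires, new credentials are created.
        Credentials rotated = vault.getCredentials(t0.plus(Duration.ofMinutes(61)), ttl);
        System.out.println(first.username().equals(rotated.username())); // false
    }
}
```

The application never sees this logic; it simply receives whichever credentials are currently valid.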

Unfortunately, when it comes to integrating Spring Boot applications with Vault, the Spring Cloud Vault framework does not work as you might expect. When the credentials expire, they are not rotated until the application is restarted. This is deliberate, to ensure that a change of credentials does not break any connection pools or active transactions.

Restarting an application to rotate the credentials is a blunt tool and is not something I recommend.

Alternate solution

So, if we don’t want to use Spring Cloud Vault, there is an alternative solution.

We can use a Vault Agent to obtain credentials on behalf of our application and then insert them into the application.

The Vault Agent is a sidecar to the main application, which means it runs in its own, second container within the Pod. It fetches the credentials from Vault and handles renewals as they expire, as you would expect.

Once it has the credentials, it inserts them as a file into the main application using a volume that is mounted into both its container and the Spring Boot application’s container.

The application can now read the credentials dynamically and use them to access the database.

This is what we will do.

Set up

There are four parts to this solution that we need to set up, as shown in this diagram:

Using Vault Agent in Kubernetes
  1. The Vault Agent (acting on behalf of our application) needs to request a secret from the Vault. To do this, it needs a valid Vault token that has the correct policy associated with it (myapp-db-policy).
  2. To get a token, Vault Agent must authenticate with Vault using its Kubernetes credentials (JWT) that it gets from its Kubernetes ServiceAccount (myapp-sa). This gives it a Vault access token with the right access policies.
  3. Vault needs to be able to validate the Vault Agent’s credentials by asking Kubernetes via its API. To do this, Vault needs its own credentials.
  4. Like the Vault Agent, Vault gets its credentials from its own ServiceAccount, which was created when Vault was installed. (This is done (almost) automatically for us because we are running Vault in the same cluster as the Vault Agent. If Vault were running externally or in another cluster, additional configuration would be required.)

Now that Vault Agent has access to Vault and can retrieve the database secrets, a new flow occurs.

Using Vault database credentials
  1. Vault Agent requests the database credentials from Vault using the previously acquired Vault access token, which is associated with the myapp-db-policy.
  2. Vault, realising it does not have the required credentials (or that they have expired), asks Postgres to create new credentials using the statements from the myapp-db-role. It connects to the database using the myapp-db-cnx connection.
  3. On creating the credentials, it passes them back to the Vault Agent which then stores them in the shared, in-memory, volume.
  4. Our Spring Boot application polls the credential’s file and updates its datasource when there is a change.
  5. When the application wants to access the database it does so, using the credentials that were last created by Vault.

You may be thinking that there is a timing race between credentials expiring and new ones being created. You would be right but the timing is such that there is a grace period where old and new credentials will work. Your application’s polling of the credentials file must be frequent enough to ensure it updates its connections before the old credentials expire.
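The rule of thumb that follows from this is that the polling interval should be at most half the credential TTL, so that even a poll that lands just after a rotation is followed by another poll before the old credentials expire. A small, hypothetical Java helper makes the check explicit (the class and method names are this article's invention, not part of any Vault or Spring API):

```java
import java.time.Duration;

// Illustrative helper for the timing rule discussed above. The
// "at most half the TTL" rule of thumb and these names are this
// article's convention, not a Vault or Spring API.
public class PollingIntervalCheck {

    // Returns true when the polling interval leaves enough headroom:
    // even if a rotation happens just after a poll, the next poll still
    // lands before the old credentials' TTL runs out.
    static boolean isSafe(Duration pollingInterval, Duration credentialTtl) {
        return pollingInterval.compareTo(credentialTtl.dividedBy(2)) <= 0;
    }

    public static void main(String[] args) {
        Duration ttl = Duration.ofMinutes(10);  // the Vault role's default_ttl
        System.out.println(isSafe(Duration.ofMinutes(5), ttl));  // true
        System.out.println(isSafe(Duration.ofMinutes(7), ttl));  // false: too slow
    }
}
```

With the 10-minute TTL and 5-minute refresh used later in this article, the check passes.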

You can probably see that there is a lot to do!

We need to set up:

  1. Vault to manage Postgres credentials
  2. Vault to authenticate against Kubernetes
  3. Spring Boot application to use rotating database credentials
  4. Deployment of our application to make it all work

Configuring Vault

A quick note about managing Vault configurations. There are three options for configuring Vault:

  1. Through its user interface/console
  2. Through its Command Line Interface (CLI)
  3. Through its API

Any configuration process should be repeatable and quick. This rules out the use of the console. Although it is useful to ensure your configuration is working as expected, it requires a human to point and click without mistakes.

Using the CLI can lead to very long and unmanageable command lines, and these cannot be scripted as the Vault image we are using does not allow us to write configuration files to its filesystem. The CLI is still useful in that adding the -output-curl-string option gives you the curl command lines to use with the API. Note that this option must come before the command's positional arguments, not at the end of the command line.

This leaves the API, which turns out to be very useful and allows me to add files to my GitHub repository that you can use from your development machine’s command line.

No surprise then that in this article, I have used the API using curl commands and configuration files. For this we need the Vault access token.

Note that I do my development on macOS on Apple silicon and the commands shown here are written for this development machine.

Set an environment variable based on the access token you received when you installed Vault:

export VAULT_TOKEN=<ROOT_TOKEN>

This now means you can (largely) cut and paste the commands I have given you below.

1. Vault to manage Postgres credentials

The Vault must be set up with:

  1. A database engine to manage database credentials (myapp-db)
  2. A connection to our database (myapp-db-cnx)
  3. A database role that creates and allows rotation of our credentials (myapp-db-role)

Enabling the database engine

For Vault to help us manage our database credentials, we need to enable a database engine.

When you enable a secrets engine in Vault, you need to do so at a mount point. In this article, I have chosen myapp-db as the mount point. To do this, we first set up the configuration for the engine.

k8s/vault/enable-db-engine.json

{
  "type": "database",
  "description": "Database engine for myapp",
  "config": {
    "options": null,
    "default_lease_ttl": "1h",
    "max_lease_ttl": "24h",
    "force_no_cache": false
  },
  "local": false,
  "seal_wrap": false,
  "external_entropy_access": false,
  "options": null
}

This sets a default Time To Live (TTL) of 1 hour, before which any credentials must be refreshed or else they will expire. It also sets a default maximum TTL of 24 hours from the point of creation. After this time period, the credentials can no longer be refreshed and need to be recreated. These defaults can be overridden, as we shall see later.
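The interaction between the two TTLs can be sketched as follows. This is an illustrative model only, not Vault's implementation: each renewal extends the lease by the default TTL, but never beyond the creation time plus the maximum TTL.

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative model of the two TTLs described above. Not Vault's
// implementation; names and logic are simplified for explanation.
public class LeaseTtlModel {

    // A renewal extends the lease by defaultTtl from now, capped at
    // issuedAt + maxTtl, after which the lease cannot be extended further.
    static Instant renew(Instant issuedAt, Instant now,
                         Duration defaultTtl, Duration maxTtl) {
        Instant requested = now.plus(defaultTtl);
        Instant hardLimit = issuedAt.plus(maxTtl);
        return requested.isBefore(hardLimit) ? requested : hardLimit;
    }

    public static void main(String[] args) {
        Instant issued = Instant.parse("2024-05-05T00:00:00Z");
        Duration defaultTtl = Duration.ofHours(1);
        Duration maxTtl = Duration.ofHours(24);

        // Renewing early in the lease's life extends it by the full hour.
        System.out.println(renew(issued, issued.plus(Duration.ofMinutes(30)),
                defaultTtl, maxTtl));  // 2024-05-05T01:30:00Z

        // Renewing near the 24h limit is capped at issue time + max TTL.
        System.out.println(renew(issued,
                issued.plus(Duration.ofHours(23).plusMinutes(45)),
                defaultTtl, maxTtl));  // 2024-05-06T00:00:00Z
    }
}
```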

Now we enable the engine:

curl -X POST -H "X-Vault-Token: ${VAULT_TOKEN}" http://localhost:31400/v1/sys/mounts/myapp-db -d @k8s/vault/enable-db-engine.json

Ok, we now have a database engine set up. If you have access to the Vault console, you should be able to see your engine in the Secrets Engine section of the User Interface (UI).

Connecting to our database

Vault will need to connect to our database in order to create and destroy user credentials. It does this using a connection that is configured within our database engine.

Again, we create a file with the configuration for connecting to the database. If you are following along, you should have a Postgres cluster set up in your Kubernetes cluster with a database called myapp. This is the database we want to connect to.

k8s/vault/myapp-db-cnx.json

{
  "plugin_name": "postgresql-database-plugin",
  "allowed_roles": "myapp-db-role",
  "connection_url": "postgresql://{{username}}:{{password}}@db-cluster-rw.pg.svc:5432/myapp",
  "max_open_connections": 5,
  "max_connection_lifetime": "5s",
  "username": "<CREATE_USER_USERNAME>",
  "password": "<CREATE_USER_PASSWORD>"
}

Remember, this connection is only used for creating users. This is why the connection lifetime is so short and the number of connections is low. We also only allow this connection to create users based on myapp-db-role.

Also notice that we set the connection string to the internal DNS name for the database cluster read/write service (db-cluster-rw.pg.svc). We do not define the cluster name as this allows us to use these configurations in different clusters.

The {{username}} and {{password}} fields are templates used by Vault to know where to add in the credentials it creates. We address the myapp database specifically.

You will see that we have two fields that we have not yet defined, <CREATE_USER_USERNAME> and <CREATE_USER_PASSWORD>. For development, you could use the postgres superuser but I prefer to get into the habit that I am configuring a production environment. That way, when I come to production, I know what I am doing.

Given this, we will create credentials for Vault to use. We will create a user called create_users.

Find your database Pod with:

kubectl get pods -n pg

Then get a command line for the database and enter the Postgres CLI (change db-cluster-1 to the name of your database Pod):

kubectl exec -it db-cluster-1 -n pg -- psql

In Postgres, users are roles that have the LOGIN privilege. In our case, we need the user to be able to log in, so we will create a user. Use the following (replacing the < > fields with your values):

create user create_users with password '<my super-secret password>' createrole;
grant connect on database myapp to create_users;

We can now use these credentials in our connection file.

Remember not to commit your credentials into a code repository, such as GitHub.

Apply this connection configuration to the Vault database engine we enabled earlier with:

curl -X POST -H "X-Vault-Token: ${VAULT_TOKEN}" http://localhost:31400/v1/myapp-db/config/myapp-db-cnx -d @k8s/vault/myapp-db-cnx.json

We are calling this connection myapp-db-cnx. If you have any problems with your database permissions or credentials, the connection will not be made and you will see the error returned by the database.

This step is almost complete. From within the UI, you should be able to see the connection. The last part is to secure our connection user.

If you are using the postgres superuser DO NOT rotate the password as you may lose access to your database.

Right now, the password for create_users is probably accessible through the configuration. One of the benefits of using Vault is that you can now get it to rotate this user’s password and then no one but Vault has access to it — not even you!

In the UI, find your connection and then click the Rotate root credentials button. Once rotated, the connection is complete and ready for use.

Telling Vault how to create a user

Even though the Vault database engine knows how to create a user in a Postgres database, we still have to tell it how to set up our users. This is because it does not know how we want our users to be configured, especially when it comes to permissions. We tell it how by giving it an SQL command template to use.

The Vault database engine uses a system of database roles to tell it how to create a user with the correct permissions. It is this database role that holds the SQL command template.

Let’s create a role called myapp-db-role, which matches the allowed role we configured in the engine earlier.

First create a configuration file that will define our Vault role:

k8s/vault/myapp-db-role.json

{
  "db_name": "myapp-db-cnx",
  "creation_statements": [
    "CREATE ROLE \"{{name}}\" WITH LOGIN INHERIT PASSWORD '{{password}}' IN ROLE \"app-user\" VALID UNTIL '{{expiration}}';",
    "ALTER USER \"{{name}}\" SET ROLE = \"app-user\";"
  ],
  "default_ttl": "10m",
  "max_ttl": "1h"
}

You can see here that we:

  • Reference our database connection, myapp-db-cnx.
  • Create the user as a member of the app-user role so that it inherits that role’s permissions on the myapp database, and set its default role to app-user so that anything it creates is owned by app-user, as we do not want objects owned by transient users.
  • Overwrite the default TTL and maximum TTL parameters.

You should refine the creation statements to suit your needs. In a production environment, for example, you may not want to allow your user to delete any data, but that gets into a deeper discussion for another day.
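To see what Vault actually sends to Postgres, here is a sketch of the placeholder expansion it performs on the creation statements. The substitution logic and the sample username, password and expiry values below are invented for illustration; Vault generates its own.

```java
import java.util.Map;

// Sketch of the placeholder substitution Vault performs on the
// creation_statements above. Illustrative only: Vault generates its own
// usernames, passwords and expiry timestamps.
public class CreationStatementExpansion {

    // Replaces each {{key}} placeholder in the template with its value.
    static String expand(String template, Map<String, String> values) {
        String result = template;
        for (Map.Entry<String, String> e : values.entrySet()) {
            result = result.replace("{{" + e.getKey() + "}}", e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        String template =
            "CREATE ROLE \"{{name}}\" WITH LOGIN INHERIT PASSWORD '{{password}}' "
          + "IN ROLE \"app-user\" VALID UNTIL '{{expiration}}';";

        // All three values here are invented examples.
        String sql = expand(template, Map.of(
                "name", "v-kubernet-myapp-db-4Zxyz",
                "password", "A1b2C3d4",
                "expiration", "2024-05-05 12:00:00+00"));

        // Prints a complete CREATE ROLE statement with no {{...}} left.
        System.out.println(sql);
    }
}
```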

Now use this file to configure the Vault database engine:

curl -X POST -H "X-Vault-Token: ${VAULT_TOKEN}" http://localhost:31400/v1/myapp-db/roles/myapp-db-role -d @k8s/vault/myapp-db-role.json

You should now be able to use the console to find this role and generate new credentials for it.

We do not need to give the engine a configuration for deleting the user as it knows how to do this; there is no application-specific information in deletion. However, Vault allows you to define the instructions in case you have special requirements.

You should be able to login using a database client and do your operations under the leased credentials. For 10 minutes anyway!

That now completes our Vault and database integration.

2. Vault and Kubernetes integration

There are two authentication processes involved with this solution. They are based around the need for:

  • Vault Agent to request secrets from Vault
  • Vault to ask Kubernetes to verify who the Vault Agent is

In the first case, acting as a proxy for the application, the Vault Agent needs to be able to authenticate itself with Vault on behalf of the application.

In the second case, Vault needs to be able to authenticate itself with Kubernetes so it can ask Kubernetes to authenticate the Vault Agent authentication request.

Vault Agent access to Vault

Vault kubernetes authentication engine

We will first set up the authentication between Vault and Kubernetes. It is important to remember that it is actually the kubernetes authentication engine within Vault that needs to be able to access Kubernetes as it proxies authentication requests from the Vault Agent through to Kubernetes.

First, we enable the kubernetes authentication engine. To do this we need to create a configuration file:

k8s/vault/enable-k8s-engine.json

{
  "type": "kubernetes",
  "description": "Authentication engine for authenticating pods against a ServiceAccount",
  "config": {
    "options": null,
    "default_lease_ttl": "0s",
    "max_lease_ttl": "0s",
    "force_no_cache": false
  },
  "local": false,
  "seal_wrap": false,
  "external_entropy_access": false,
  "options": null
}

We will mount it at its default location (auth/kubernetes).

curl -X POST -H "X-Vault-Token: ${VAULT_TOKEN}" http://localhost:31400/v1/sys/auth/kubernetes -d @k8s/vault/enable-k8s-engine.json

Once we have enabled the kubernetes engine, we need to configure it. Running Vault in Kubernetes allows it to extract some of the values it needs from the files created and maintained by Kubelet as part of Kubernetes. However, you still need to tell it where to find the Kubernetes API. Create the file:

k8s/vault/vault-k8s-config.json

{
  "kubernetes_host": "https://kubernetes.default.svc.cluster.local:443"
}

Then add the configuration to the kubernetes engine:

curl -X POST -H "X-Vault-Token: ${VAULT_TOKEN}" http://localhost:31400/v1/auth/kubernetes/config -d @k8s/vault/vault-k8s-config.json 

We have now set up the kubernetes authentication engine but we still need to do more configuration.

Vault Agent

Whilst the kubernetes authentication engine can now proxy authentication requests to Kubernetes, it needs to be configured to allow the Vault Agent to access the database secret.

We need to connect the application’s ServiceAccount to a Vault secret and to do that we need to provide a policy that governs the access.

First, let’s create the policy. The policy we want to create is given by:

path "myapp-db/creds/myapp-db-role" {
  capabilities = ["read"]
}

This allows a token associated with this policy to read the database credentials in our database engine mounted at myapp-db.

Let’s convert this into a JSON payload that we can use with curl. Create this file:

k8s/vault/myapp-db-policy.json

{
  "policy": "path \"myapp-db/creds/myapp-db-role\" {\n capabilities = [\"read\"]\n}"
}
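If you are curious how the HCL policy turns into that escaped JSON string, this small sketch performs the same escaping. A real client would use a proper JSON library; the class and method names here are invented for illustration.

```java
// Sketch showing how the HCL policy text becomes the escaped JSON payload
// above. This minimal escape handles only quotes, backslashes and newlines,
// which is enough for this policy; use a JSON library in real code.
public class PolicyPayload {

    static String jsonEscape(String s) {
        return s.replace("\\", "\\\\")
                .replace("\"", "\\\"")
                .replace("\n", "\\n");
    }

    public static void main(String[] args) {
        String hcl = "path \"myapp-db/creds/myapp-db-role\" {\n"
                   + " capabilities = [\"read\"]\n"
                   + "}";
        String payload = "{\"policy\":\"" + jsonEscape(hcl) + "\"}";
        // Prints a single-line JSON payload equivalent to the file above.
        System.out.println(payload);
    }
}
```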

Now create the policy:

curl -X POST -H "X-Vault-Token: ${VAULT_TOKEN}" http://localhost:31400/v1/sys/policies/acl/myapp-db-policy -d @k8s/vault/myapp-db-policy.json

This has created a policy that allows us to read the database credentials. We now have to associate this with the kubernetes authentication engine so that, when authenticated, the token that is returned includes this policy.

Again, we create a configuration file:

k8s/vault/myapp-k8s-role.json

{
  "bound_service_account_names": "myapp-sa",
  "bound_service_account_namespaces": "default",
  "policies": "myapp-db-policy",
  "ttl": "1h"
}

This allows a Pod (i.e. the Vault Agent) to authenticate against Kubernetes via the Vault kubernetes engine. In our case, the engine authenticates the Vault Agent using the myapp-sa ServiceAccount within the default namespace. Once authenticated, the token the application (Vault Agent) receives is bound to the myapp-db-policy access policy and has a TTL of 1 hour.

We then add this role to the Vault kubernetes engine:

curl -X POST -H "X-Vault-Token: ${VAULT_TOKEN}" http://localhost:31400/v1/auth/kubernetes/role/myapp-k8s-role -d @k8s/vault/myapp-k8s-role.json

Before we leave this configuration step, it is useful to create the Kubernetes ServiceAccount that we referenced above (“bound_service_account_names”: “myapp-sa”).

We do this through Kubernetes, which needs a manifest file:

k8s/myapp-service-account.yml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: myapp-tokenreview-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: myapp-sa
  namespace: default

We now create the ServiceAccount:

kubectl apply -f k8s/myapp-service-account.yml

This completes the Vault Kubernetes integration step. Next, we need to make the changes to our Spring Boot application.

3. Spring Boot Application

We have arrived at the point where we need to set up our Spring Boot application to utilise the Vault connection. If you have been following my set of articles, you should have a skeleton Spring Boot application with a REST API backed by a Postgres database. So far, I have described the standalone, connected and k8s-debug profiles. We now turn our attention to the local-cluster profile. In this profile, we will use the Vault Agent to inject secrets as described earlier.

Just to recap. Vault Agent acts as a proxy for our Spring Boot application and requests the database credentials from the Vault itself. The Agent handles renewals and expiries automatically.

Managing database secrets using Vault Agent

The Vault Agent then injects the credentials it receives into our application by way of an in-memory volume that is mounted into our application Pod. From the application’s perspective, it finds the credentials it needs in a file in its file system.

With the configuration described in this article, the application receives its credentials through a file called /vault/secrets/myapp-db.

First, we add a scheduled task to our application that periodically checks the file for updates. When it sees a change, it applies the new credentials to our data source, which, in our skeleton, is the default Hikari data source. You can see how we set this up from this snippet (go to the file link to see the whole file):

config.DatabaseDynamicCredentialsJob

@Service
@Slf4j
@RequiredArgsConstructor
@ConditionalOnProperty(prefix = "application", name = "dynamic-db-config.enabled", havingValue = "true")
public class DatabaseDynamicCredentialsJob {

    @Value("${application.dynamic-db-config.enabled:false}")
    boolean dynamicDbEnabled;

    @Value("${application.dynamic-db-config.filename}")
    String dynamicDbCredentialsFilename;

    private final HikariDataSource hikariDataSource;

    @Scheduled(
        fixedDelayString = "${application.dynamic-db-config.refresh:5}",
        timeUnit = TimeUnit.MINUTES
    )
    public void checkForRefreshedCredentials() throws IOException {
        ...
    }
}

From the snippet, you can see that we set this up as a Service that is only created when the application.dynamic-db-config.enabled property is set to true. This allows us to link this to our local-cluster profile, avoiding the overwrite of our credentials under other profiles.

You can also see that the task is Scheduled to trigger at a default rate of once every 5 minutes. This can be adjusted with the application.dynamic-db-config.refresh property. The polling interval should be no more than half the TTL of the secret created by the Vault database engine.

There is also a property that defines the name of the file that the Vault Agent will inject (application.dynamic-db-config.filename). These properties need to be set in application-local-cluster.yml (defaults are set in application.yml, which disables this refresh mechanism).

In the scheduled task, the file is read and compared to the current username and password. If there is a change, the current username and password are replaced with the revised values. The data source is then given the new credentials to use and the connection pool is rotated using the soft eviction method. This ensures that transactions are completed before each connection is closed. This is shown in this snippet:

config.DatabaseDynamicCredentialsJob

...
boolean refreshed = !StringUtils.equals(username, hikariDataSource.getUsername()) ||
        !StringUtils.equals(password, hikariDataSource.getPassword());

if (refreshed) {
    log.info("Updating database credentials");
    hikariDataSource.setUsername(username);
    hikariDataSource.setPassword(password);
    hikariDataSource.getHikariPoolMXBean().softEvictConnections();
}
...
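For completeness, here is a hedged sketch of how the credentials file might be parsed before the comparison above. It assumes the Agent's template renders one key=value pair per line; the default Vault Agent template renders a different format, so you would adjust the parsing (or the agent-inject-template annotation) to match. See the repository for the actual implementation.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hedged sketch of reading the injected credentials file. It ASSUMES the
// Vault Agent template was configured to render one key=value pair per
// line; the default Agent template renders a different format, so adjust
// this to match your agent-inject-template annotation.
public class CredentialsFileReader {

    // Parses lines of the form "key=value" into a map, ignoring the rest.
    static Map<String, String> parse(List<String> lines) {
        Map<String, String> values = new HashMap<>();
        for (String line : lines) {
            int idx = line.indexOf('=');
            if (idx > 0) {
                values.put(line.substring(0, idx).trim(),
                           line.substring(idx + 1).trim());
            }
        }
        return values;
    }

    static Map<String, String> read(Path credentialsFile) throws java.io.IOException {
        return parse(Files.readAllLines(credentialsFile));
    }

    public static void main(String[] args) {
        // Invented example values; in the Pod the file would be
        // /vault/secrets/myapp-db, written by the Vault Agent sidecar.
        Map<String, String> creds = parse(List.of(
                "username=v-kubernet-myapp-db-4Zxyz",
                "password=A1b2C3d4"));
        System.out.println(creds.get("username"));
        System.out.println(creds.get("password"));
    }
}
```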

We also need to configure the credential injection code properties. Add the following snippet to your local-cluster profile Spring Boot configuration:

k8s/application-local-cluster.yml

...
application:
  dynamic-db-config:
    enabled: true
    filename: "/vault/secrets/myapp-db"
    refresh: 5

This completes step 3 and we now have a Spring Boot application that will automatically update its connections when the database credentials are rotated automatically by Vault.

4. Deploying our application

At this stage we now have:

  • Dynamic database credentials that the Vault database engine maintains and rotates
  • A Vault kubernetes engine that is able to authenticate against Kubernetes so that it can validate the Vault Agent credentials
  • A Vault policy that allows an appropriate Vault access token to access our database credentials
  • The ability for a Vault Agent associated with the appropriate ServiceAccount to access the database credentials
  • A Spring Boot application that is able to read, use and update credentials injected by the Vault Agent.

There is just one step left and that is to deploy our Spring Boot application such that it includes Vault Agent.

You may remember that we created a ServiceAccount that allows an application to access the database secrets in Vault. We called this myapp-sa and we need to ensure our application is associated with it in the deployment manifest.

If you do not have a local-cluster-deployment.yml file, you can create it by copying the k8s-debug.yml file from the earlier article.

In the file, you can delete the STATIC_DB_USERNAME and STATIC_DB_PASSWORD variables in the env stanza that sets up the secrets for the database, as we will no longer be using these. Change the SPRING_PROFILES_ACTIVE to local-cluster.

Then, in the template stanza that defines what we are deploying, we add the following annotations stanza.

Note that Vault uses a Kubernetes mutating admission webhook that allows it to be informed about Pod creation and, if the Pod is correctly annotated, to add additional containers to the Pod being created. We do not need to define these containers as Vault does this automatically behind the scenes, but we do need to give it the right annotations to make it happen.

k8s/local-cluster-deployment.yml

...
template:
  metadata:
    labels:
      app: sb-k8s-template
    annotations:
      vault.hashicorp.com/agent-inject: "true"
      vault.hashicorp.com/role: "myapp-k8s-role"
      vault.hashicorp.com/agent-inject-secret-myapp-db: "myapp-db/creds/myapp-db-role"
      vault.hashicorp.com/agent-inject-file-myapp-db: "myapp-db.creds"
      vault.hashicorp.com/auth-path: "auth/kubernetes"
      vault.hashicorp.com/agent-run-as-user: "1881"
      vault.hashicorp.com/agent-pre-populate: "true"
      vault.hashicorp.com/agent-pre-populate-only: "false"
...

These annotations do the following:

  • agent-inject: tells Vault to create the sidecar to inject the secrets
  • role: the role Vault provides to the access token when the Agent is authenticated
  • agent-inject-secret-xxx: names the secret to be injected (xxx) and gives its Vault path
  • agent-inject-file-xxx: the name of the file the Vault Agent will write the secret xxx to
  • auth-path: this is the path to the Vault authentication engine to use
  • agent-run-as-user: ensures the Vault Agent container does not run as the root user
  • agent-pre-populate: by setting this true, it ensures that our database credentials are available before our application runs (Vault uses an init container to do this)
  • agent-pre-populate-only: by setting this false, it ensures that our credentials continue to be updated and not just on start-up (Vault uses a sidecar container to do this)

You can find out more about these annotations here.

This sees the completion of the deployment manifest updates but before we deploy the application, there is a change we need to make to our Docker image.

Dockerfile updates

If you have been following along, you will have a Docker/Dockerfile.k8s.debug file that you use to create your Docker image. This one creates an image that allows your application to be debugged. As we are moving towards a production ready build, we need to create a new version of this file without remote debugging enabled:

Docker/Dockerfile.local.cluster

FROM openjdk:17.0.2-slim-buster
RUN addgroup --system spring && useradd --system -g spring spring
USER spring:spring
ARG JAR_FILE=build/libs/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
EXPOSE 8080 8081

After creating your JAR file with the new configuration files, you now need to create your Docker image with:

docker build -t sb-k8s-template:01 -f Docker/Dockerfile.local.cluster .

Now upload it to your Kind cluster:

kind load docker-image sb-k8s-template:01

Finally deploy it to the cluster using our new deployment manifest:

kubectl apply -f k8s/local-cluster-deployment.yml

Trying it out

After deploying the application, check the logs to ensure the application is up and running.

You can then try adding fishes and fish tanks as explained in this article.

If you want to see the rotation in process, you could reduce the TTL on the database connection to a few minutes, remembering to increase the frequency at which the application is looking for changes.

If anything goes wrong

I have added a WHEN_THINGS_GO_WRONG.md file to the repository to provide some pointers I found useful as I was putting this project together.

Summary

Hopefully you managed to get through this longer-than-normal article. A large number of activities are required to enable dynamic credentials in our Spring Boot, Kubernetes, Vault and Postgres solution, and all of them must be completed before it will work.

Just to recap, in this article we:

  • Configured Vault to manage dynamic credentials on our database
  • Configured Vault to authenticate applications against Kubernetes
  • Secured our Vault configuration using policies
  • Modified our application to accept rotating database credentials
  • Modified our application configuration and deployment to utilise the Vault Agent to act as a proxy for our application

I hope you enjoyed this article and learned at least one thing from it.

If you found this article of interest, please give me a clap as that helps me identify what people find useful and what future articles I should write. If you have any suggestions, please add them as notes or responses.
