Intro
In this blog post, we’ll talk about what exactly Phoenix LiveDashboard [1] is, what you gain by using it, and my opinions on when/how you should use it. Finally, we’ll set up a Phoenix application that makes use of Phoenix LiveDashboard and put together a simple load testing script to exercise the application a bit. Without further ado, let’s dive right into things!
What is Phoenix LiveDashboard?
For those who have been following my blog for a while, you can probably tell that application/system observability is something that is near and dear to my heart. Having the ability to know when something is going wrong and, more importantly, what is going wrong can mean the difference between losing and keeping your customers. I hate to be the bearer of bad news, but production bugs and outages are an inevitability. You can mitigate a lot of production problems via high-quality tests, static analysis, and automated deployment pipelines (see Getting your Elixir application ready for CI/CD for more in-depth information on that topic), but when production issues do arise, you’ll want observability solutions in place to help you figure out what is going on.
LiveDashboard is a tool that you can use to easily get insight into your running Elixir+Phoenix application. You can capture information related to the BEAM, Phoenix, logs, and even your own metrics that you publish. The best thing about LiveDashboard is that it is all part of your application and requires no additional services or servers to run. Not to mention, it has a very slick UI!
As its name implies, LiveDashboard is built upon LiveView. For those unfamiliar with LiveView, I would suggest watching Chris McCord’s ElixirConf EU 2019 keynote presentation [6]. At a high level, LiveView allows you to create interactive frontend applications using little to no JavaScript. The state of your frontend is controlled and stored by your Phoenix application. As a result, your frontend is optimally re-rendered only when your backend state changes, with the changes being pushed to the client via WebSocket.
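To make that concrete, here is a minimal, hypothetical LiveView (the module name and markup are mine, not part of our project), using the ~L sigil from the LiveView version that shipped around Phoenix 1.5:

defmodule MyAppWeb.CounterLive do
  use Phoenix.LiveView

  # All state lives server-side in the socket assigns
  def mount(_params, _session, socket) do
    {:ok, assign(socket, count: 0)}
  end

  # Browser events arrive over the WebSocket and update server state
  def handle_event("increment", _value, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end

  # Only the dynamic parts of the template are diffed and pushed down
  def render(assigns) do
    ~L"""
    <p>Count: <%= @count %></p>
    <button phx-click="increment">+</button>
    """
  end
end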
When should you use Phoenix LiveDashboard?
While LiveDashboard is an amazing tool, it is important to understand when you should/shouldn’t use it and whether or not it replaces any of your existing tools. To be clear, LiveDashboard does not replace your ELK or Prometheus+Grafana stacks. Those are purpose-built logging/monitoring tools with purpose-built databases that store data you can query (see Structured logging in Elixir using Loki and Prometheus, PostGIS and Phoenix Part 1 for details on those topics). LiveDashboard, on the other hand, does not actually persist any time-series data or index any log messages. Instead, its focus is on presenting real-time logs and metrics to you whilst you use the tool. When you close LiveDashboard, all the data captured during your session is gone. With expectations set on what LiveDashboard is/isn’t…when would I reach for a tool like this?
Personally, I see LiveDashboard serving two very important use cases:
- Local development - During local development, you may or may not have access to a full ELK or Prometheus+Grafana stack. LiveDashboard is an extremely lightweight alternative that you can use to achieve similar results. You’ll be able to plot your Telemetry [7] metrics over time, get information on the BEAM, and capture all of your application logs (all for the duration of your LiveDashboard session).
- Deployed Environment - In a deployed context, I can still see LiveDashboard providing value (even though I personally have not yet deployed it anywhere). I think it fits into the same category of tools as observer_cli [2], recon [3], and remote IEx sessions using Mix Releases [4] or Distillery [5]. These are all tools that I incorporate into my projects on the off chance that I need them to debug or introspect a running production application. One of the beauties of the BEAM is how introspectable it is, even in a production context. All these tools incur minimal to no runtime performance penalty and can be invaluable in pinpointing production issues. I see LiveDashboard as being another tool that I can quickly open up, see what’s going on, and continue my investigation. Of course, you need to ensure that your LiveDashboard is properly secured and not visible to the public, but that is largely out of scope for this article ([8] describes how you can set up basic auth to prevent prying eyes from seeing your LiveDashboard instance; a rough sketch follows below).
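To give an idea of what that entails, here is a minimal sketch of gating the dashboard behind basic auth in your router. The module and credential values are illustrative, and [8] remains the authoritative reference:

import Plug.BasicAuth
import Phoenix.LiveDashboard.Router

pipeline :admins_only do
  # Plug.BasicAuth ships with Plug >= 1.10; in a real deployment, pull
  # these credentials from configuration rather than hard-coding them
  plug :basic_auth, username: "admin", password: "super-secret"
end

scope "/" do
  pipe_through [:browser, :admins_only]

  live_dashboard "/dashboard", metrics: MyAppWeb.Telemetry
end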
Show me the code!
In order to see Phoenix LiveDashboard in action, we’ll be setting up a real world Phoenix application that is backed by Postgres. Our application will expose a single endpoint that we can leverage to search for used cars using some basic query filters. We will also hook up some Telemetry events so that we can get our own metrics to appear in LiveDashboard.
Step 1: Create a new Phoenix project - commit
To begin, let’s install the Phoenix project generator. If you have the luxury of starting a greenfield project, I would suggest fetching the latest Phoenix project generator as it will automatically set up LiveView and LiveDashboard for you. At the time of this writing, Phoenix 1.5.1 is the latest release, so we will use that. Run the following to install the Phoenix project generator:
$ mix archive.install hex phx_new 1.5.1
With the Phoenix project generator installed, let’s go ahead and create a new project:
$ mix phx.new auto_finder_livedashboard
With that done, you are ready to start adding your business logic (what an awesome developer experience if I do say so myself)! If you are working with an existing project that does not have LiveView and LiveDashboard set up, I would suggest looking to the official documentation for guidance; the rough shape of a manual setup is shown below.
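As a quick reference (the version number is roughly what shipped alongside Phoenix 1.5; the official docs are the source of truth), a manual setup amounts to adding the dependency and mounting the dashboard in your router:

# mix.exs
defp deps do
  [
    {:phoenix_live_dashboard, "~> 0.2"}
  ]
end

# lib/my_app_web/router.ex
import Phoenix.LiveDashboard.Router

scope "/" do
  pipe_through :browser

  live_dashboard "/dashboard", metrics: MyAppWeb.Telemetry
end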
Step 2: Adding our business logic - commit
Let’s start off by creating an Ecto migration via mix ecto.gen.migration used_cars and adding the following contents to the generated migration file (if you feel ambitious, you can add indices for the various searchable fields :)):
defmodule AutoFinderLivedashboard.Repo.Migrations.UsedCars do
  use Ecto.Migration

  def change do
    create table(:used_cars) do
      add :make, :string
      add :model, :string
      add :year, :integer
      add :mileage, :integer
      add :price, :integer

      timestamps()
    end
  end
end
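Assuming your database connection is configured, you can create the database and run the migration with the standard Ecto mix tasks:

$ mix ecto.create
$ mix ecto.migrate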
With our migration in place, let’s also update our priv/repo/seeds.exs script so that we can seed our database with dummy data:
alias AutoFinderLivedashboard.{Repo, UsedCars.UsedCar}

car_selection = [
  {"Acura", ~w(ILX TLX RLX RDX MDX NSX), 15_000..35_000},
  {"Honda", ~w(Accord Civic CR-V Odyssey Passport), 10_000..25_000},
  {"Nissan", ~w(GT-R 370Z Titan Leaf Sentra), 25_000..50_000},
  {"Mazda", ~w(MX-5 CX-3 CX-5 CX-9), 15_000..25_000},
  {"Chevrolet", ~w(Camaro Corvette Colorado Silverado), 25_000..50_000},
  {"Ford", ~w(Escape Explorer Mustang Focus), 15_000..25_000},
  {"Audi", ~w(A4 Q3 A6 Q7 R8 S3 S4 RS5), 20_000..50_000},
  {"BMW", ~w(M2 M3 M5 X4 X7), 20_000..50_000},
  {"Subaru", ~w(Impreza Legacy Forester BRZ WRX), 15_000..25_000},
  {"Porsche", ~w(Taycan Panamera Macan Cayenne Carrera Cayman), 40_000..70_000},
  {"Ferrari", ~w(812 F8 488 GTC4 Portofino), 150_000..250_000}
]

1..1_000
|> Enum.each(fn _ ->
  {make, models, price_range} = Enum.random(car_selection)
  model = Enum.random(models)
  price = Enum.random(price_range)
  year = Enum.random(2015..2020)
  mileage = Enum.random(10_000..60_000)

  %UsedCar{}
  |> UsedCar.changeset(%{make: make, model: model, price: price, year: year, mileage: mileage})
  |> Repo.insert!()
end)
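The generated mix ecto.setup alias normally runs this seed script for you as part of setting up the database, but you can also run it by hand at any point:

$ mix run priv/repo/seeds.exs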
You’ll probably notice that we made a call to a changeset and referenced a struct that we have not yet implemented. Let’s go ahead and add those now. Create a file lib/auto_finder_livedashboard/used_cars/used_car.ex with the following contents:
defmodule AutoFinderLivedashboard.UsedCars.UsedCar do
  use Ecto.Schema

  import Ecto.Changeset

  @fields ~w(make model year mileage price)a

  @derive {Jason.Encoder, only: @fields}

  schema "used_cars" do
    field :make, :string
    field :model, :string
    field :year, :integer
    field :mileage, :integer
    field :price, :integer

    timestamps()
  end

  def changeset(used_car, attrs \\ %{}) do
    used_car
    |> cast(attrs, @fields)
    |> validate_required(@fields)
  end
end
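As a quick sanity check, you can exercise the changeset from an IEx session (iex -S mix) and confirm that the required-field validation kicks in (output shown is illustrative):

iex> alias AutoFinderLivedashboard.UsedCars.UsedCar
iex> UsedCar.changeset(%UsedCar{}, %{make: "Honda"}).valid?
false
iex> UsedCar.changeset(%UsedCar{}, %{make: "Honda", model: "Civic", year: 2018, mileage: 30_000, price: 15_000}).valid?
true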
We’ll also want to create a module that will act as the entry point into our “used car” context. Let’s create a file lib/auto_finder_livedashboard/used_cars/used_cars.ex with the following contents:
defmodule AutoFinderLivedashboard.UsedCars do
  import Ecto.Query

  alias AutoFinderLivedashboard.{Repo, UsedCars.UsedCar}

  @event_name [:auto_finder_livedashboard, :query]

  def get_used_cars(query_params) do
    base_query = from(used_car in UsedCar)

    query_params
    |> Enum.reduce(base_query, &handle_query_param/2)
    |> Repo.all()
  end

  defp handle_query_param({"make", make}, acc_query) do
    :telemetry.execute(@event_name, %{count: 1}, %{filter: "make"})

    from used_car in acc_query, where: ilike(used_car.make, ^make)
  end

  defp handle_query_param({"model", model}, acc_query) do
    :telemetry.execute(@event_name, %{count: 1}, %{filter: "model"})

    from used_car in acc_query, where: ilike(used_car.model, ^model)
  end

  defp handle_query_param({"min_year", min_year}, acc_query) do
    :telemetry.execute(@event_name, %{count: 1}, %{filter: "min_year"})

    from used_car in acc_query, where: used_car.year >= ^min_year
  end

  defp handle_query_param({"max_price", max_price}, acc_query) do
    :telemetry.execute(@event_name, %{count: 1}, %{filter: "max_price"})

    from used_car in acc_query, where: used_car.price <= ^max_price
  end

  defp handle_query_param({"max_mileage", max_mileage}, acc_query) do
    :telemetry.execute(@event_name, %{count: 1}, %{filter: "max_mileage"})

    from used_car in acc_query, where: used_car.mileage <= ^max_mileage
  end
end
Our get_used_cars/1 function will be called from our controller and is used to dynamically build the user’s query. The function reduces over all of the user’s search options, appending additional where clauses to the query. Once the query is built, a call to Repo.all/1 is made to fetch all the used cars that match the user’s search terms. As a side note, I find this pattern of dynamically building queries very flexible, easy to test, and clean from a reader’s perspective :).
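For example, assuming the database has been seeded, a query from IEx might look something like this (the exact results will vary with your random seed data):

iex> AutoFinderLivedashboard.UsedCars.get_used_cars(%{"make" => "Honda", "max_price" => 20_000})
[%AutoFinderLivedashboard.UsedCars.UsedCar{make: "Honda", ...}, ...]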
Another thing you will probably notice is all the calls to :telemetry.execute/3. This function is used to emit custom telemetry events [7], which are then handled by the event’s registered handlers. Handlers for :telemetry.execute/3 events are registered via :telemetry.attach/4 and :telemetry.attach_many/4. We won’t be directly attaching our own handlers, as Telemetry Metrics [9] takes care of this for us. All we need to do is tell Telemetry Metrics what kind of data we want to derive from the given event. In this case, we want to capture metrics on which query filters users most often use. With events being emitted in lib/auto_finder_livedashboard/used_cars/used_cars.ex, let’s open up lib/auto_finder_livedashboard_web/telemetry.ex and add our metrics data points. You’ll notice that there is already a fair number of data points being captured in your metrics/0 function; these were generated as part of the Phoenix project generator. To add our own, add the following to your metrics/0 function:
def metrics do
  [
    ...

    # Application metrics
    counter("auto_finder_livedashboard.query.count", tags: [:filter])
  ]
end
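For context, this is roughly what registering a handler by hand looks like; Telemetry Metrics does the equivalent for us under the hood, so the snippet below is purely illustrative and does not belong in our application:

# Illustrative only -- Telemetry Metrics attaches its own handlers for us
:telemetry.attach(
  "log-query-filters",                  # unique handler id
  [:auto_finder_livedashboard, :query], # the event emitted above
  fn _event_name, measurements, metadata, _config ->
    IO.inspect({measurements, metadata}, label: "query event")
  end,
  nil                                   # optional handler config
)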
The tags list lets us create a counter for each value of :filter. In our case, the cardinality of :filter is 5, as there are only 5 possible values. Take care to ensure that the cardinality of your tags (often also called labels) is not unbounded, as unbounded tag values are an anti-pattern in a lot of time-series databases and carry serious performance implications.
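To make the anti-pattern concrete, this is the kind of metric you would want to avoid (the :user_id tag here is hypothetical): since every distinct tag value produces its own time series, an unbounded tag can balloon into an enormous number of series:

# Avoid: one counter time series per user -- unbounded cardinality
counter("auto_finder_livedashboard.query.count", tags: [:user_id])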
With our business functionality and metrics in place, let’s create a new controller at lib/auto_finder_livedashboard_web/controllers/used_car_controller.ex and add the following contents (for the purposes of this tutorial we’ll omit input validation…but you should always do that in production):
defmodule AutoFinderLivedashboardWeb.UsedCarController do
  use AutoFinderLivedashboardWeb, :controller

  alias AutoFinderLivedashboard.UsedCars

  require Logger

  def index(conn, params) do
    results = UsedCars.get_used_cars(params)

    json(conn, results)
  end
end
With our controller in place, all that is left is to update our router.ex file so that we can hit our controller. Open up lib/auto_finder_livedashboard_web/router.ex and ensure that it looks like the following:
defmodule AutoFinderLivedashboardWeb.Router do
  use AutoFinderLivedashboardWeb, :router

  ...

  pipeline :api do
    plug :accepts, ["json"]
  end

  scope "/api", AutoFinderLivedashboardWeb do
    pipe_through :api

    get "/used_cars", UsedCarController, :index
  end

  ...
end
Lastly, if you want to enable operating system metrics, you’ll want to add the :os_mon application to the extra_applications list in your mix.exs file:
def application do
  [
    mod: {AutoFinderLivedashboard.Application, []},
    extra_applications: [:logger, :runtime_tools, :os_mon]
  ]
end
With all that done, let’s get to work on setting up our load tester so we can see our graphs and logs dance!
Step 3: Writing a load tester - commit
In order to put some load on our system, we will be writing a simple load test Elixir script. Our exs
script will not
require any external dependencies as we will leverage OTP’s built in :httpc
HTTP client. With that said, create a file
called load_test.exs
at the root of your project with the following contents:
base_url = "http://localhost:4000/api/used_cars"
wait_time_per_query_ms = 100
total_requests = 100

:ok = :inets.start()

Enum.each(1..total_requests, fn count ->
  random_num = :rand.uniform(10)

  url =
    cond do
      random_num <= 5 ->
        "#{base_url}?make=ferrari"

      random_num <= 7 ->
        "#{base_url}?model=F8"

      random_num == 8 ->
        "#{base_url}?min_year=1990"

      random_num == 9 ->
        "#{base_url}?max_price=200000"

      true ->
        "#{base_url}?max_mileage=50000"
    end

  :httpc.request(:get, {String.to_charlist(url), []}, [], [])

  if rem(count, 10) == 0, do: IO.puts("Completed #{count} requests")

  :timer.sleep(wait_time_per_query_ms)
end)
Let’s walk through and break down our load tester so that it makes sense. At the top, we define a few constants that we leverage throughout the load test. Prior to making any HTTP requests, we need to start the :inets Erlang application and ensure that it returns an :ok atom. After that, we leverage Enum.each/2 to go through our range, which defines the total number of requests that will be made. We then have a simple cond expression to simulate an affinity towards users searching for cars via make. After that we make the request, and make the process sleep at the end so that the graph doesn’t just immediately spike…we want the animation to be pleasant to watch :).
Step 4: Taking it all for a test drive
With all the code in place we are now ready to give this all a test drive. Before starting up our Phoenix application, we’ll want to run Postgres inside of a container alongside the app. Usually, I would opt for a Docker Compose setup with some mounted volumes, but we’ll keep it simple this time around and just have an ephemeral container. In one terminal, run the following to start the database:
$ docker run -p 5432:5432 -e POSTGRES_PASSWORD=postgres postgres:12
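If you would rather go the Docker Compose route mentioned above, a minimal docker-compose.yml might look like the following (the volume name is illustrative):

version: "3.7"
services:
  db:
    image: postgres:12
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata: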
And in another terminal run the following to get the Phoenix application up and running:
$ mix deps.get
$ mix ecto.setup
$ npm install --prefix assets
$ mix phx.server
Once the server has started, feel free to navigate to http://localhost:4000/dashboard to see LiveDashboard running.
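Before kicking off the load tester, you can also sanity-check the API itself with a quick curl (the query values are arbitrary):

$ curl "http://localhost:4000/api/used_cars?make=Honda&max_price=20000"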
With LiveDashboard up and running, we can now execute our load tester and see how our application reacts, all in real time. In yet another terminal, run the following and navigate to http://localhost:4000/dashboard/nonode%40nohost/metrics/auto_finder_livedashboard:
$ elixir load_test.exs
With your browser open and pointed to the LiveDashboard URL, you should see something like the following:
Closing thoughts
Well done and thanks for sticking with me to the end! We covered quite a lot of ground and hopefully you picked up a couple of cool tips and tricks along the way. To recap, we leveraged the latest Phoenix project generator to create a new Phoenix application with LiveView and LiveDashboard baked right in. We added some real world functionality to our application and tracked the usage of our API via telemetry events. Finally, we wrote a simple load tester to exercise our API and watched our LiveDashboard graphs update in real time!
Feel free to leave comments, feedback, or suggestions for what you would like to see in the next tutorial. Till next time!
Additional Resources
Below are some additional resources if you would like to deep dive into any of the topics covered in the post.
- [1] https://github.com/phoenixframework/phoenix_live_dashboard
- [2] https://github.com/zhongwencool/observer_cli
- [3] https://github.com/ferd/recon/
- [4] https://hexdocs.pm/mix/Mix.Tasks.Release.html
- [5] https://github.com/bitwalker/distillery
- [6] https://www.youtube.com/watch?v=8xJzHq8ru0M
- [7] https://github.com/beam-telemetry/telemetry
- [8] https://hexdocs.pm/phoenix_live_dashboard/Phoenix.LiveDashboard.html#module-extra-add-dashboard-access-on-all-environments-including-production
- [9] https://hexdocs.pm/telemetry_metrics/Telemetry.Metrics.html