Migrating to Biff, a self-hosted Firebase alternative for Clojure
5 May 2020


Biff, like Firebase, is a web framework and a deployment solution. It shares some of Firebase's core features and is intended to make web development with Clojure extremely easy. Biff is targeted towards early-stage startups and hobby projects first, but over time I'd like it to become a serious option for apps that need scale.

I started writing Biff about six weeks ago. Last week I finished moving Findka to it, away from Firebase (and AWS before that). It's still pre-release quality: I need to add a few more features, clean up the code, and write a lot of documentation. But since I've reached the milestone of running my own startup on Biff, I thought I'd give a preview of its current features with examples of how I'm using them.

(For any regular readers: I'm going to resume my Solo Hacker's Guide To Clojure project after Biff is released. It'll become a Getting Started guide for Biff.)


I liked Firebase, but I prefer a long-running JVM + Clojure backend to an ephemeral Node + ClojureScript backend. I also felt that several parts of Firebase would be better if they were re-implemented with Clojure components. For example, I found Firebase's security rules to be error-prone and hard to debug, and I've replaced them with a Spec-based version.

(I'm also very picky and wanted something that was completely under my control.)

Frameworks vs. Libraries

Frameworks are not inherently bad; they're just hard to get right. Instead of trying to make Biff do everything for everyone, I'm making it easy to take apart. It should be easy to e.g. use 80% of Biff and replace the remaining 20% with your own components (without forking Biff!). I'm also focusing on providing lots of configuration options for overriding default behaviors.

Table of Contents

Feel free to jump to whichever sections interest you most.

Installation and deployment

Biff runs completely locally during development. Just add Biff to your project dependencies and run clj -m biff.core (or similar). For production, I'm running it on DigitalOcean. I have an install script for their Ubuntu image, and eventually I'd like to provide a 1-click install. Besides installing Biff, the script configures the firewall, sets up Nginx, and installs certificates via Let's Encrypt.

A $5 droplet is all you need if you don't mind filesystem persistence, but I'm also using managed Postgres (as a backend for Crux).

Biff uses tools.deps' git dependency feature for deployment. For example, here's the deps.edn file for my production instance of Biff:

{:deps
 {findka
  {:git/url "...",
   :sha "2f653e0846bf0661e5b0640589f9b371f8c53bca"},
  biff
  {:git/url "...",
   :sha "499bd51179f8ca8056769984ebff7ea2267bce28",
   :deps/root "biff"}}}

To upgrade Findka and/or Biff, I:

  1. Push to GitHub
  2. Run git rev-parse HEAD to get the latest sha
  3. SSH to the server and update the shas in deps.edn
  4. systemctl restart biff

The Biff service adds a deploy key for Findka to the SSH agent (since it's a private repo) and then runs clj -m biff.core. Later I'll include an admin console, analogous to the Firebase console. It'll have a UI for deploys, deploy keys, and rollbacks so you don't have to manage them from the command line. I'll also add an option for deploy-on-push.

These deploys currently involve downtime. I'm planning to make Biff work with an additional droplet and a load balancer so it can avoid that, but at Findka's current scale this isn't important for me yet.

Database: Crux

(Compare to Cloud Firestore)

Crux provides the core features of immutability, convenient modeling of graph data, and datalog queries. It's also easy to self-host since it can run in the same JVM process as your app.

Biff will set up a Crux node with (by default) JDBC persistence in production and filesystem persistence via RocksDB in development. You just need to provide your JDBC connection parameters (or set it to use RocksDB in production).

Not much else to say here, but see the next three sections for what Biff adds on top of Crux.

Subscribable queries

(Compare to Firestore realtime updates)

Biff allows you to subscribe to Crux queries from the frontend with one major caveat: cross-entity joins are not allowed (Firebase also has this restriction). Basically, this means all the where clauses in the query have to be for the same entity.

; OK
'{:find [doc]
  :where [[doc :foo 1]
          [doc :bar "hey"]]}

; Not OK
'{:find [doc]
  :where [[user :name "Tilly"]
          [doc :user user]]}

So to be clear, Biff's subscribable "queries" are not datalog at all. They're just predicates that can take advantage of Crux's indices. Biff makes this restriction so that it can provide query updates to clients efficiently without having to solve a hard research problem first. However, it turns out that we can go quite far even with this restriction.

On the frontend, Biff provides some code that initializes a websocket connection and handles query subscriptions for you:

(def default-subscriptions
  #{[:biff/sub '{:table :users
                 :where [[:name "Ben"]
                         [:age age]
                         [(<= 18 age)]
                         [(yourapp.core/likes-cheese? doc)]]}]})

(def subscriptions (atom default-subscriptions))
(def sub-data (atom {}))

(biff.client/init-sub
  {:subscriptions subscriptions
   :sub-data sub-data})

If you want to subscribe to a query, swap! it into subscriptions. If you want to unsubscribe, swap! it out. Biff will populate sub-data with the results of your queries and remove old data when you unsubscribe. You can then use the contents of that atom to drive your UI. sub-data holds a map of the form subscription->doc-id->doc, for example:

{[:biff/sub '{:table :users
              :where ...}]
 {{:user/id #uuid "some-uuid"} {:name "Sven"
                                :age 250
                                ...}}}
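Concretely, subscribing and unsubscribing is just set manipulation on that atom (the query below is illustrative, not one of Findka's):

```clojure
(def subscriptions (atom #{}))
(def sub-data (atom {}))

(def events-sub
  [:biff/sub '{:table :events
               :where [[:event-type :like]]}])

;; Subscribe: Biff's client code watches the atom, sends the new
;; subscription over the websocket, and fills sub-data with results.
(swap! subscriptions conj events-sub)
(def subscribed? (contains? @subscriptions events-sub)) ; => true

;; Unsubscribe: Biff also removes the subscription's entries from sub-data.
(swap! subscriptions disj events-sub)
```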

Note the subscription format again:

[:biff/sub '{:table :users
             :where [[:name "Ben"]
                     [:age age]
                     [(<= 18 age)]
                     [(yourapp.core/likes-cheese? doc)]]}]

The first keyword is a Sente event ID. Biff provides an event handler for :biff/sub. You can provide your own subscription sources by changing the event ID. You'll have to register an event handler on the backend that handles subscribes, unsubscribes, and notifying subscribed clients when data changes.

The actual query map omits the entity variable in the where clauses since it has to be the same for each clause anyway. But it will be bound to doc in case you want to use it in e.g. a predicate function. :find is similarly omitted. The :table value is connected to authorization rules which you define on the backend (see the next section). When a client subscribes to this query, it will be rejected unless you define rules for that table which allow the query. You also have to whitelist any predicate function calls (like yourapp.core/likes-cheese?), though the comparison operators (like <=) are whitelisted for you.

All this is most powerful when you make the subscriptions atom a derivation of sub-data. Here's a snippet from Findka (which uses Rum, though Rum isn't required):

(defonce db (atom {...}))

; same as (do (rum.core/cursor-in db [:sub-data]) ...)
(defcursors db
  sub-data [:sub-data])

; same as (do
;           (rum.core/derived-atom [sub-data] :findka.client.db/data
;             (fn [sub-data]
;               (apply merge-with merge (vals sub-data))))
;           ...)
(defderivations [db sub-data ...] findka.client.db

  data (apply merge-with merge (vals sub-data))

  id->item (:items data)
  events (vals (:events data))
  item-ids (->> events ...)

  uid (get-in data [:uid nil :uid])
  user-ref {:user/id uid}
  self (get-in data [:users user-ref])
  email (:user/email self)
  signed-in (and (some? uid) (not= :signed-out uid))

  biff-subs [; :uid is a special non-Crux query. Biff will respond
             ; with the currently authenticated user's ID.
             :uid
             (when signed-in
               {:table :events
                :where [[:event-type]
                        [:user user-ref]]})
             (when signed-in
               ; You can subscribe to individual documents too
               {:table :users
                :id user-ref})
             (for [id item-ids]
               {:table :items
                :id id})]
  subscriptions (->> biff-subs
                  flatten ; (for ...) returns a seq, hence flatten
                  (filter some?)
                  (map #(vector :biff/sub %))
                  set))
For background: Findka is a recommender system for any kind of content (books, movies, etc.). An "item" is one of those content items. It contains at least a URL but often also a title, an image, and a short text description. An "event" could be one of five things:

  • A "pick", where the user picks the item using the search bar.
  • A "recommend", where Findka's algorithm recommends an item to the user.
  • A "like", where the user hits thumbs-up on an item.
  • A "dislike", which is the opposite.
  • A "meh", where the user unlikes or undislikes an item, returning it to neutral.

Each event contains the ID of the relevant item, which can then be used to fetch its metadata.

When an authenticated user goes to Findka, the following will happen:

  1. Client subscribes to :uid (i.e. subscriptions contains #{[:biff/sub :uid]}).
  2. sub-data is populated with the user's ID.
  3. signed-in changes to true and biff-subs gets updated. The client is now subscribed to the current user's events and user info (which includes things like their email address).
  4. sub-data is populated with more data. The UI will display the user's email address. Using events, item-ids gets populated with a list of the IDs for all items that should be displayed in the user's feed right now.
  5. The client subscribes to the metadata for those items. When it arrives, the UI will display the items along with their current thumbs-up/down values.

This is what I meant before when I said that we can go pretty far without cross-entity joins: using this method, we can declaratively load all the relevant data and perform joins on the client. This should be sufficient for many situations.
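For instance, once events and item metadata are both in the client's merged data map, the "join" is just a map lookup. The data shapes below mirror the earlier sub-data example; the helper function is my own illustration, not part of Biff:

```clojure
;; sub-data values merged into one map, as in the defderivations snippet:
(def data
  {:events {"e1" {:event-type :like :item "i1"}
            "e2" {:event-type :pick :item "i2"}}
   :items  {"i1" {:title "Dune"}
            "i2" {:title "Blade Runner"}}})

;; "Join" each event to its item's metadata with a plain lookup:
(defn events-with-items [{:keys [events items]}]
  (for [[_ event] events]
    (assoc event :item-doc (get items (:item event)))))

(events-with-items data)
;; => ({:event-type :like, :item "i1", :item-doc {:title "Dune"}} ...)
```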

However, it won't work if you need an aggregation of a set of documents that's too large to send to the client. To handle that, I'd like to try integrating Materialize. But it's not a priority for me yet.

Implementation notes

There's lots of work left to do. In particular, the subscription-notifying code has some race conditions I need to fix. However, I see no reason why the general approach wouldn't be scalable. Biff watches Crux's transaction log, and after each transaction it gets a list of changed documents with their values before and after the transaction. It then goes through the list of subscriptions and finds out which ones were affected (an easy, efficient operation thanks to our no-cross-entity-joins restriction).
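To illustrate why the restriction makes this efficient, here's a toy version of the matching step (my own sketch, not Biff's code, and it ignores whitelisted predicate clauses like [(<= 18 age)]): since every clause constrains a single entity, deciding whether a changed document affects a subscription reduces to per-document checks.

```clojure
(defn clause-matches? [doc clause]
  (let [[attr value] clause]
    (if (= 1 (count clause))
      (contains? doc attr)        ; [:attr] - document has the attribute
      (= value (get doc attr))))) ; [:attr value] - attribute equals value

(defn affects? [{:keys [where]} doc]
  (every? #(clause-matches? doc %) where))

(affects? '{:table :users
            :where [[:name "Ben"] [:age]]}
          {:name "Ben" :age 35})
;; => true
```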

Also, the client currently doesn't do optimistic writes. As a result, Findka responds slightly sluggishly when you hit thumbs up/thumbs down (it's not terrible though; the whole round trip doesn't take very long). I'll add optimistic writes soon, automatically rolling them back if the transaction fails.

Read/write authorization rules

(Compare to Firebase security rules)

Clients can send arbitrary subscriptions and transactions to the backend, but they must pass authorization rules which you define. Here are most of Findka's rules:

; Same as (do (s/def ...) ...)
(u/sdefs
  ::provider-id (s/or
                  :artist+title (s/tuple string? string?)
                  :other (some-fn int? string?))
  ::provider keyword?
  ::content-type #{:book :music ...}
  ::timestamp inst?
  ::event-type #{:pick :like :dislike :meh :recommend}
  ::item-id (u/only-keys :req-un [::content-type ::provider ::provider-id])
  :user/id uuid?
  :ref/user (u/only-keys :req [:user/id])
  ::event (u/only-keys
            :req-un [::provider-id ...])
  ::show-modal boolean?
  ::admin boolean?
  ::tester boolean?
  ::unsubscribed boolean?
  ::user (u/only-keys
           :req [:user/email]
           :opt-un [::show-modal ...]))

(def rules
  {:events {:spec [uuid?    ; first: spec for the document ID
                   ::event] ; second: spec for the document
            :create (fn [{:keys [current-time auth-uid generated-id]
                          {:keys [timestamp user event-type]} :doc}]
                      (and
                        generated-id
                        (= auth-uid (:user/id user))
                        (= current-time timestamp)
                        (not= :recommend event-type)))
            :query (fn [{:keys [auth-uid]
                         {:keys [user]} :doc}]
                     (= (:user/id user) auth-uid))}
   :picks {:spec [uuid? (s/and ::event #(= :pick (:event-type %)))]}
   :items {:spec [::item-id any?]
           :get (constantly true)}
   :users {:spec [:ref/user ::user]
           :get (fn [{:keys [auth-uid]
                      {:keys [user/id]} :doc}]
                  (= auth-uid id))
           :update (fn [{:keys [doc old-doc auth-uid]}]
                     (and
                       (= auth-uid (:user/id doc))
                       (apply = (map #(dissoc % :show-modal) [doc old-doc]))))}})

The keys of rules are "tables." In SQL databases, tables are defined in the database layer (same for Firebase, but they call tables "collections"). In Biff, tables are defined by the value of :spec: the table is the set of documents which satisfy the spec predicates. Whenever a client reads or writes data, it specifies which table each document belongs to. The backend verifies that the documents satisfy the specs, and then it verifies that the operation passes the relevant authorization function (:create, :update, :delete, :query, or :get).
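A stripped-down sketch of that flow (illustrative only; Biff's real checker covers all five operations and passes more context to the rule functions):

```clojure
(require '[clojure.spec.alpha :as s])

(s/def ::event-type #{:pick :like :dislike :meh :recommend})
(s/def ::event (s/keys :req-un [::event-type]))

(def rules
  {:events {:spec [uuid? ::event]
            :create (fn [{:keys [auth-uid doc]}]
                      (= auth-uid (get-in doc [:user :user/id])))}})

(defn allowed? [rules {:keys [table op doc-id doc] :as request}]
  (let [{[id-spec doc-spec] :spec :as table-rules} (get rules table)
        rule-fn (get table-rules op)]
    (boolean (and (s/valid? id-spec doc-id)   ; document ID satisfies its spec
                  (s/valid? doc-spec doc)     ; document satisfies its spec
                  rule-fn                     ; the operation has a rule at all
                  (rule-fn request)))))       ; and the rule allows it

(allowed? rules {:table :events
                 :op :create
                 :auth-uid "u1"
                 :doc-id (java.util.UUID/randomUUID)
                 :doc {:event-type :like
                       :user {:user/id "u1"}}})
;; => true
```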

For example, a user may create a new event document as long as:

  • The document has exactly the keys :provider-id, :provider, :content-type, :event-type, :timestamp, and :user (and the values satisfy their respective specs).
  • The document ID is a UUID.
  • The :user value is set to the authenticated user's ID.
  • The :timestamp value was set by the server when it received the transaction, not set by the client.
  • The event is a :pick, :like, :dislike, or :meh event.
  • The document ID was randomly generated by the server, not set by the client.

Queries are described in the previous section. A transaction looks like this:

  [:biff/tx {[:events]
             {:content-type :music
              :provider :lastfm
              :provider-id ["Breaking Benjamin"
                            "Give Me A Sign"]
              :event-type :pick
              :timestamp :db/current-time
              :user @findka.client.db/user-ref}

             [:users @findka.client.db/user-ref]
             {:db/update true
              :show-modal false}}]

The transaction is a map from idents to documents. The first element of an ident is a table. The second element, if present, is a document ID. If omitted, it means we're creating a new document and we want the server to set the ID to a random UUID.


  • :db/current-time is replaced by the server with the current time.
  • If :db/update is true, the given document will be merged with an existing document, failing if the document doesn't exist.
  • There's also :db/merge which simply creates the document if it doesn't exist (i.e. upsert).
  • You can delete documents by setting them to nil.
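To make the special values concrete, here's a sketch of how a server might expand them before writing to Crux (my own illustration of the semantics above; it ignores deletes and authorization):

```clojure
(defn resolve-doc
  "Expand Biff-style special values in a client-submitted document."
  [doc {:keys [current-time existing-doc]}]
  (let [doc (into {}
                  (map (fn [[k v]]
                         [k (if (= v :db/current-time) current-time v)]))
                  doc)]
    (cond
      ;; :db/update merges into an existing doc, failing if it's absent.
      (:db/update doc) (if existing-doc
                         (merge existing-doc (dissoc doc :db/update))
                         (throw (ex-info "Document doesn't exist" {:doc doc})))
      ;; :db/merge is an upsert: merge, creating the doc if needed.
      (:db/merge doc)  (merge existing-doc (dissoc doc :db/merge))
      :else doc)))

(resolve-doc {:db/update true :show-modal false}
             {:existing-doc {:user/email "a@example.com" :show-modal true}})
;; => {:user/email "a@example.com", :show-modal false}
```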

The transaction doesn't actually have to be a map. If you want to create multiple documents in the same table with random IDs, use a nested vector instead:

[:biff/tx [[[:events] {:content-type :book
                       ...}]
           [[:events] {:content-type :movie
                       ...}]]]

Database triggers

(Compare to Firestore triggers)

Triggers let you run code in response to document writes. You must define a map of table->operation->fn, for example:

(defn fetch-metadata [{:keys [biff/submit-tx doc db] :as env}]
  ; same as (let [item-id (delay ...)
  ;               ...]
  ;           ...)
  (trident.util/letdelay [item-id (findka.util/event->item-id doc)
                          item (fetch-metadata* env item-id)]
    (when (and (not (crux/entity db item-id)) item)
      (submit-tx
        (assoc env
          :tx [[:crux.tx/put
                (merge {:crux.db/id item-id} item)]])))))
(def triggers
  {:picks {:create fetch-metadata}})

When a user adds a new content item to Findka, this trigger fetches its metadata. See the previous section for the definition of the :picks table.
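Dispatching a trigger is conceptually just a lookup in that table->operation->fn map. In this toy sketch (not Biff's actual dispatch code), the trigger body records the call instead of fetching real metadata:

```clojure
(def fetched (atom []))

(def triggers
  {:picks {:create (fn [{:keys [doc]}]
                     ;; stand-in for fetching item metadata
                     (swap! fetched conj (:provider-id doc)))}})

(defn run-triggers! [triggers {:keys [table op] :as change}]
  (when-some [trigger-fn (get-in triggers [table op])]
    (trigger-fn change)))

(run-triggers! triggers {:table :picks
                         :op :create
                         :doc {:provider-id ["Breaking Benjamin" "Give Me A Sign"]}})
;; deletes don't match any registered trigger, so nothing runs:
(run-triggers! triggers {:table :picks :op :delete :doc {}})

@fetched ;; => [["Breaking Benjamin" "Give Me A Sign"]]
```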

Authentication

(Compare to Firebase Authentication)

If you're OK with email link authentication (i.e. the user clicks a link in an email to sign in), Biff will handle it for you (otherwise you can roll your own authentication). Biff provides a set of HTTP endpoints for this. For example, when you click the Get Started button on Findka's home page, Findka sends a POST request to /api/signup with an email parameter. In a configuration file, we specify a function that Biff can use to send email. When Biff receives the POST request, it creates a new user, generates a link with a JWT that will authenticate the user, and passes it along with a :template parameter to the email-sending function. Here's Findka's email function:

(def templates
  {:biff.auth/signin
   (fn [{:keys [biff.auth/link to]}]
     {:subject "Sign in to Findka"
      :html (rum.core/render-static-markup
              [:div
               [:p "We received a request to sign in to Findka using this email address."]
               [:p [:a {:href link} "Click here to sign in."]]
               [:p "If you did not request this link, you can ignore this email."]])})
   :biff.auth/signup ...
   :recommend ...})

(defn send-email* [api-key opts]
  (http/post (str "")
    {:basic-auth ["api" api-key]
     :form-params (assoc opts :from "Findka <>")}))

(defn send-email [{:keys [to text template data api-key] :as opts}]
  (if (some? template)
    (if-some [template-fn (get templates template)]
      (send-email* api-key
        (assoc (template-fn (assoc data :to to)) :to to))
      (biff.util/anom :incorrect "Email template not found."
        :template template))
    (send-email* api-key (select-keys opts [:to :subject :text :html]))))

(Findka passes the API key in through a closure; Biff doesn't set it.)

The emailed link goes to another HTTP endpoint that Biff defines. It validates the JWT and then sets a session cookie. There are also endpoints for signing in an existing user, signing out, and checking if the current user is authenticated (I use the last one for redirecting on various pages).

I'd like to support password and SSO authentication in the future, but it's not a priority yet since email link authentication works well for Findka.

Client/server communication

(Compare to Calling Firebase functions)

Biff sets up an Immutant web server with Reitit for routing. It also initializes a Sente connection. Biff applies some default middleware for you, but this is overridable.

So all you have to do is provide a Reitit route object and a Sente event handler. Biff's default middleware will include system resources (such as a Crux connection) with the requests/events.

On the frontend, Biff provides a function for sending Sente events. For HTTP endpoints, you can just use an HTTP client like cljs-http directly. If you're calling an endpoint that requires authentication and you're using Biff authentication, you'll have to include Biff's CSRF token in the X-CSRF-Token header. The token is stored in a csrf cookie:

(defn csrf []
  ; reads the "csrf" cookie from document.cookie
  (js/decodeURIComponent
    (second (re-find #"csrf=([^;]+)" (.-cookie js/document)))))

(If you're making the request with a form, you can set the hidden csrf form field—either on the server for SSR pages or with JS for static pages.)

Note that for many CRUD operations, Biff's read/write authorization rules will allow you to submit queries and transactions directly from the frontend, so you can avoid a proliferation of endpoints. Findka currently has only three HTTP endpoints and one event handler (not including Biff's endpoints and handlers).

Static resources

(Compare to Firebase Hosting)

Biff will copy your static resources to a www/ directory. In production, www/ is a symlink to /var/www/ and is served directly by Nginx. In development, the JVM process will serve files from that directory.

Biff looks for static resources in two places. First, there's a :biff/static-pages configuration option which you can set to a map from paths to Rum data structures. For example, Findka's looks like this:

(require '[findka.static.util :as util])

(def app
  (util/base-page {:scripts [[:script {:src "/cljs/app/main.js"}]]
                   :show-footer false}
    [:#app util/loading]))

(def signup-success
  (util/base-page {...}
    [:h3 "Signup successful"]
    [:p "Please check your inbox to confirm your email address."]))

(def not-found
  (util/base-page {...}
    [:h3 "Page not found"]
    [:p "Try " [:a {:href "/"} "the home page"] " instead."]))


(def pages
  {"/" home
   "/app/" app
   "/signup-success/" signup-success
   "/404.html" not-found
   "/some-page/" [:html ...]})

Biff will export these pages to HTML on startup.

Second, Biff will look for resources in www/your-app-ns/ on the classpath. Here's a subset of Findka's resources directory:

└── www
    └── findka.biff
        ├── cljs
        │   └── app
        │       └── main.js
        ├── css
        │   └── bootstrap.css
        ├── favicon-16x16.png
        ├── favicon.ico
        ├── img
        │   └── demo.gif
        └── js
            └── ensure-signed-out.js

As you can see, I currently commit generated resources (except for HTML files, but including CLJS compilation output) to the git repo. However, you can easily add initialization code to your app that instead generates the resources (or downloads them from a CI server) during deployment.

I'd like to add a CDN integration eventually.

Plugins and config

When you run clj -m biff.core, Biff searches the classpath for plugins and then starts them in a certain order. To define a plugin, you must set ^:biff metadata on a namespace and then set a components var to a list of plugin objects in that namespace. Biff comes with three plugins:

(ns ^:biff biff.core
  (:require
    [biff.system :refer [start-biff]]
    [biff.util :as bu]
    ...))


(def core
  {:name :biff/core
   :start (fn [sys]
            (let [env (keyword (or (System/getenv "BIFF_ENV") :prod))
                  {:biff.core/keys [start-nrepl nrepl-port instrument]
                   :or {nrepl-port 7888} :as sys} (merge sys (bu/get-config env))]
              ...))})

(def console
  {:name :biff/console
   :requires [:biff/core]
   :required-by [:biff/web-server]
   :start (fn [sys]
            (-> sys
              (merge #:console.biff.auth{:on-signin "/"
                                         :on-signin-request "/biff/signin-request"
                                         :on-signin-fail "/biff/signin-fail"
                                         :on-signout "/biff/signin"})
              (start-biff 'console.biff)))})

(def web-server
  {:name :biff/web-server
   :requires [:biff/core]
   :start (fn [{:biff.web/keys [host->handler port] :as sys}]
            (let [server (imm/run
                           #(if-some [handler (get host->handler (:server-name %))]
                              (handler %)
                              {:status 404
                               :body "Not found."
                               :headers {"Content-Type" "text/plain"}})
                           {:port port})]
              (update sys :trident.system/stop conj #(imm/stop server))))})

(def components [core console web-server])

Biff uses the :requires and :required-by values to start plugins in the right order. You can think of plugins kind of like Ring middleware. They receive a system map and return a modified version of it.
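The ordering itself can be pictured as a small topological sort over the :requires/:required-by edges (my own sketch of the idea, not Biff's actual code; it assumes there are no dependency cycles):

```clojure
(defn start-order [components]
  ;; Build a map of plugin name -> set of plugins it depends on.
  (let [deps (reduce (fn [m {:keys [name requires required-by]}]
                       (as-> m m
                         (update m name (fnil into #{}) requires)
                         ;; :required-by reverses the edge direction.
                         (reduce #(update %1 %2 (fnil conj #{}) name)
                                 m required-by)))
                     {}
                     components)]
    ;; Repeatedly start any plugin whose dependencies have all started.
    (loop [order [], remaining (set (map :name components))]
      (if (empty? remaining)
        order
        (let [ready (filter #(empty? (remove (set order) (deps %)))
                            remaining)]
          (recur (into order (sort ready))
                 (reduce disj remaining ready)))))))

(start-order [{:name :biff/web-server :requires [:biff/core]}
              {:name :biff/core}
              {:name :biff/console
               :requires [:biff/core]
               :required-by [:biff/web-server]}])
;; => [:biff/core :biff/console :biff/web-server]
```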

The :biff/core plugin reads your configuration options from a config.edn file and merges them into the system map. The :biff/console plugin starts a Biff instance for administering your app. Right now it's not actually used for anything, but later I'll use it to serve a web admin console which you can use for things like deploying your app. (It'll be like the Firebase console.)

This is also a good place for internal apps, custom console extensions and any data you'd like to keep separate from your app's database. For example, the text and metadata of this article are stored in the :biff/console database. I have another Biff plugin which writes the Findka blog posts to and also crossposts them to (I serve my personal website from Biff too).

Finally, the :biff/web-server starts an Immutant web server which can be shared by all plugins. For example, start-biff, among other things, returns (update sys :biff.web/host->handler assoc host handler).

Here's an excerpt from Findka's plugin:

(ns ^:biff findka.core
  ...)

(defn start-findka [{:keys [findka.mailgun/api-key] :as sys}]
  (-> sys
    (merge
      #:findka.biff.auth{:send-email #(send-email (assoc % :api-key api-key))
                         :on-signup "/signup-success/"
                         :on-signin-request "/signin/sent/"
                         :on-signin-fail "/expired/"
                         :on-signin "/app/"
                         :on-signout "/"}
      #:findka.biff{:fn-whitelist '[map?]
                    :event-handler #(ws-handlers/api % (:?data %))
                    :triggers triggers/triggers
                    :rules schema/rules
                    :routes [http-handlers/route]
                    :static-pages static/pages})
    (start-biff 'findka.biff)))

(def components
  [{:name :findka/core
    :requires [:biff/core]
    :required-by [:biff/web-server]
    :start start-findka}
   {:name :findka/blog
    :requires [:biff/console :findka/core]
    :start blog/write-blog}])

The 'findka.biff argument I pass to start-biff is a namespace for Findka's Biff config options. (It's also used to decide where to look for static resources and things like that). Since we have two instances of Biff running, we can set default Biff options under :biff.* that will apply to both the Biff console and Findka, or we can set options for specific Biff instances by putting them under the namespace, e.g. :findka.biff/host "". The return values can also be found under the given namespace by subsequent plugins. For example, my blog plugin gets a Crux connection from :console.biff/node.

Here's Findka's config file:

{:prod {:timbre/level :info

        :biff.crux.jdbc/user "..."
        :biff.crux.jdbc/password "..."
        :biff.crux.jdbc/host "..."
        :biff.crux.jdbc/port ...

        :console.biff/host ""
        :console.biff.crux.jdbc/dbname "biff"

        :findka.biff/host ""
        :findka.biff.crux.jdbc/dbname "findka"
        :findka.mailgun/api-key "..."
        :findka.thetvdb/api-key "..."
        :findka.lastfm/api-key "..."}
 :dev {:inherit [:prod]
       :biff/dev true
       ;:console.biff/host "localhost"
       :findka.biff/host "localhost"
       :timbre/level :debug}}

Since I store secrets in there, I keep this file out of version control. If you want to manage your secrets and/or config in some other way, you can easily add your own Biff plugin and have it run first.
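For illustration, :inherit can be resolved by recursively merging the parent environments underneath the current one. This is a hypothetical helper sketching the semantics, not Biff's bu/get-config, and the config values are made up:

```clojure
(def config
  {:prod {:timbre/level :info
          :findka.biff/host "example.com"} ; illustrative value
   :dev  {:inherit [:prod]
          :biff/dev true
          :timbre/level :debug}})

(defn resolve-config [config env]
  (let [{:keys [inherit] :as m} (get config env)]
    ;; Merge inherited envs first so the current env's keys win.
    (apply merge
           (conj (mapv #(resolve-config config %) inherit)
                 (dissoc m :inherit)))))

(resolve-config config :dev)
;; => {:timbre/level :debug, :findka.biff/host "example.com", :biff/dev true}
```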

Monitoring and debugging

There's not much provided out of the box right now. You can watch the server logs (which include stdout from your app) by running journalctl -u biff -f over SSH. You can get some analytics by feeding Nginx's logs to goaccess.

You can tunnel an nREPL connection by running ssh -NL 7888:localhost:7888 against your server, then connecting to port 7888 from your editor. You can get system resources and config from @biff.core/system. Biff provides some helper functions for stopping and restarting the system. You should be careful with this, of course, but I have found it to be quite wonderful for fixing production bugs quickly.

Multiple apps

As mentioned in Plugins and config, you can run multiple Biff instances (each with their own database). At a minimum, this means you can run the Biff console and your own app from the same server. If you're using Biff for side projects, you could easily make a Biff instance for each one. I don't know if there's significant overhead from starting additional Crux nodes though.

Another option is you can write apps that all share the same Biff instance (and thus the same database). In fact, I have a dream where all the apps I want to use are written as Biff plugins running on a server which I control, and all my data is in a single database.

If that kind of thing became common, I think it would have large implications for the software industry. Open-source is predominantly used for building blocks; most web apps are closed-source. But what if all you had to do to distribute an open-source web app was push it to a git repo and then let users—even non-technical people—install it on their own Biff servers?

For example, imagine a music app that stores each user's music in their own personal DigitalOcean object storage (it's only $5/month for 250GB). A Biff plugin could provide a web player and an API for mobile apps. It could have its own open-source recommendation algorithm that picks which songs to play. There could be an integration with Bandcamp so you can easily add new songs to your collection.

This would be great for content publishing and social networking too. All the content you create (articles, photos, videos, tweets, whatever) would be stored on your server. It could then be published in various places: your personal website (hosted on Biff of course), RSS feeds, social networks via their APIs, your personal mailing list, your mom's digital photo frame that has an email address for adding pictures to it...

If this really took off, I predict that commercial software for consumers would be mainly useful in the form of APIs. In the music example, Findka could provide an API that compares your listening history to other Findka users and recommends new music. Or you could have a web email client on Biff that uses Mailgun as the mail server. In all cases, the final applications would be open-source and thus easy to tweak and extend. A dream come true.

That's my dream anyway.
