applied-llms.org
2a06:98c1:3120::3  Public Scan

Submitted URL: https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fapplied-llms.org%2F%3Futm_source=tldrnewsletter/1/0100018fdd9b3725-897e8fad-92b9...
Effective URL: https://applied-llms.org/?utm_source=tldrnewsletter
Submission: On June 03 via API from US — Scanned from DE

Form analysis: 2 forms found in the DOM

POST https://app.convertkit.com/forms/6652366/subscriptions

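The form above posts a single `email_address` field to the ConvertKit subscriptions endpoint. As a sketch, the request it produces can be reconstructed from the markup alone; note that a live submission may also require hidden fields or anti-bot tokens that ConvertKit's JavaScript adds client-side (an assumption — only the visible markup is analyzed here).

```python
from urllib.parse import urlencode

# Endpoint and field name taken verbatim from the form markup below.
FORM_ACTION = "https://app.convertkit.com/forms/6652366/subscriptions"

def build_subscription_request(email: str) -> tuple[str, bytes]:
    """Return (url, urlencoded body) matching the form's single text input.

    The real page may append extra parameters client-side (assumption);
    this mirrors only what the static HTML declares.
    """
    body = urlencode({"email_address": email}).encode("ascii")
    return FORM_ACTION, body

url, body = build_subscription_request("user@example.com")
print(url)
print(body.decode())  # email_address=user%40example.com
```

This is useful when auditing what data a scanned form would exfiltrate on submit, without executing the page's scripts.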
<form action="https://app.convertkit.com/forms/6652366/subscriptions" class="seva-form formkit-form" method="post" data-sv-form="6652366" data-uid="b3e2fda9e7" data-format="inline" data-version="5" min-width="400 500 600 700 800">
  <div data-style="clean">
    <ul class="formkit-alert formkit-alert-error" data-element="errors" data-group="alert"></ul>
    <div data-element="fields" data-stacked="false" class="seva-fields formkit-fields">
      <div class="formkit-field"><input type="text" class="formkit-input" name="email_address" style="color:#797979;border-color:#e3e3e3;border-radius:4px;font-weight:400" aria-label="Email Address" placeholder="Email Address" required=""></div>
      <button data-element="submit" class="formkit-submit formkit-submit" style="color:#fff;background-color:#378973;border-radius:4px;font-weight:400">
        <div class="formkit-spinner">
          <div></div>
          <div></div>
          <div></div>
        </div><span class="">Subscribe</span>
      </button>
    </div>
  </div>
  <style>
    .formkit-form[data-uid="b3e2fda9e7"] * {
      box-sizing: border-box;
    }

    .formkit-form[data-uid="b3e2fda9e7"] {
      -webkit-font-smoothing: antialiased;
      -moz-osx-font-smoothing: grayscale;
    }

    .formkit-form[data-uid="b3e2fda9e7"] legend {
      border: none;
      font-size: inherit;
      margin-bottom: 10px;
      padding: 0;
      position: relative;
      display: table;
    }

    .formkit-form[data-uid="b3e2fda9e7"] fieldset {
      border: 0;
      padding: 0.01em 0 0 0;
      margin: 0;
      min-width: 0;
    }

    .formkit-form[data-uid="b3e2fda9e7"] body:not(:-moz-handler-blocked) fieldset {
      display: table-cell;
    }

    .formkit-form[data-uid="b3e2fda9e7"] h1,
    .formkit-form[data-uid="b3e2fda9e7"] h2,
    .formkit-form[data-uid="b3e2fda9e7"] h3,
    .formkit-form[data-uid="b3e2fda9e7"] h4,
    .formkit-form[data-uid="b3e2fda9e7"] h5,
    .formkit-form[data-uid="b3e2fda9e7"] h6 {
      color: inherit;
      font-size: inherit;
      font-weight: inherit;
    }

    .formkit-form[data-uid="b3e2fda9e7"] h2 {
      font-size: 1.5em;
      margin: 1em 0;
    }

    .formkit-form[data-uid="b3e2fda9e7"] h3 {
      font-size: 1.17em;
      margin: 1em 0;
    }

    .formkit-form[data-uid="b3e2fda9e7"] p {
      color: inherit;
      font-size: inherit;
      font-weight: inherit;
    }

    .formkit-form[data-uid="b3e2fda9e7"] ol:not([template-default]),
    .formkit-form[data-uid="b3e2fda9e7"] ul:not([template-default]),
    .formkit-form[data-uid="b3e2fda9e7"] blockquote:not([template-default]) {
      text-align: left;
    }

    .formkit-form[data-uid="b3e2fda9e7"] p:not([template-default]),
    .formkit-form[data-uid="b3e2fda9e7"] hr:not([template-default]),
    .formkit-form[data-uid="b3e2fda9e7"] blockquote:not([template-default]),
    .formkit-form[data-uid="b3e2fda9e7"] ol:not([template-default]),
    .formkit-form[data-uid="b3e2fda9e7"] ul:not([template-default]) {
      color: inherit;
      font-style: initial;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .ordered-list,
    .formkit-form[data-uid="b3e2fda9e7"] .unordered-list {
      list-style-position: outside !important;
      padding-left: 1em;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .list-item {
      padding-left: 0;
    }

    .formkit-form[data-uid="b3e2fda9e7"][data-format="modal"] {
      display: none;
    }

    .formkit-form[data-uid="b3e2fda9e7"][data-format="slide in"] {
      display: none;
    }

    .formkit-form[data-uid="b3e2fda9e7"][data-format="sticky bar"] {
      display: none;
    }

    .formkit-sticky-bar .formkit-form[data-uid="b3e2fda9e7"][data-format="sticky bar"] {
      display: block;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-input,
    .formkit-form[data-uid="b3e2fda9e7"] .formkit-select,
    .formkit-form[data-uid="b3e2fda9e7"] .formkit-checkboxes {
      width: 100%;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-button,
    .formkit-form[data-uid="b3e2fda9e7"] .formkit-submit {
      border: 0;
      border-radius: 5px;
      color: #ffffff;
      cursor: pointer;
      display: inline-block;
      text-align: center;
      font-size: 15px;
      font-weight: 500;
      margin-bottom: 15px;
      overflow: hidden;
      padding: 0;
      position: relative;
      vertical-align: middle;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-button:hover,
    .formkit-form[data-uid="b3e2fda9e7"] .formkit-submit:hover,
    .formkit-form[data-uid="b3e2fda9e7"] .formkit-button:focus,
    .formkit-form[data-uid="b3e2fda9e7"] .formkit-submit:focus {
      outline: none;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-button:hover>span,
    .formkit-form[data-uid="b3e2fda9e7"] .formkit-submit:hover>span,
    .formkit-form[data-uid="b3e2fda9e7"] .formkit-button:focus>span,
    .formkit-form[data-uid="b3e2fda9e7"] .formkit-submit:focus>span {
      background-color: rgba(0, 0, 0, 0.1);
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-button>span,
    .formkit-form[data-uid="b3e2fda9e7"] .formkit-submit>span {
      display: block;
      -webkit-transition: all 300ms ease-in-out;
      transition: all 300ms ease-in-out;
      padding: 12px 24px;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-input {
      background: #ffffff;
      font-size: 15px;
      padding: 12px;
      border: 1px solid #e3e3e3;
      -webkit-flex: 1 0 auto;
      -ms-flex: 1 0 auto;
      flex: 1 0 auto;
      line-height: 1.4;
      margin: 0;
      -webkit-transition: border-color ease-out 300ms;
      transition: border-color ease-out 300ms;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-input:focus {
      outline: none;
      border-color: #1677be;
      -webkit-transition: border-color ease 300ms;
      transition: border-color ease 300ms;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-input::-webkit-input-placeholder {
      color: inherit;
      opacity: 0.8;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-input::-moz-placeholder {
      color: inherit;
      opacity: 0.8;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-input:-ms-input-placeholder {
      color: inherit;
      opacity: 0.8;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-input::placeholder {
      color: inherit;
      opacity: 0.8;
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="dropdown"] {
      position: relative;
      display: inline-block;
      width: 100%;
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="dropdown"]::before {
      content: "";
      top: calc(50% - 2.5px);
      right: 10px;
      position: absolute;
      pointer-events: none;
      border-color: #4f4f4f transparent transparent transparent;
      border-style: solid;
      border-width: 6px 6px 0 6px;
      height: 0;
      width: 0;
      z-index: 999;
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="dropdown"] select {
      height: auto;
      width: 100%;
      cursor: pointer;
      color: #333333;
      line-height: 1.4;
      margin-bottom: 0;
      padding: 0 6px;
      -webkit-appearance: none;
      -moz-appearance: none;
      appearance: none;
      font-size: 15px;
      padding: 12px;
      padding-right: 25px;
      border: 1px solid #e3e3e3;
      background: #ffffff;
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="dropdown"] select:focus {
      outline: none;
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="checkboxes"] {
      text-align: left;
      margin: 0;
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="checkboxes"] [data-group="checkbox"] {
      margin-bottom: 10px;
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="checkboxes"] [data-group="checkbox"] * {
      cursor: pointer;
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="checkboxes"] [data-group="checkbox"]:last-of-type {
      margin-bottom: 0;
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="checkboxes"] [data-group="checkbox"] input[type="checkbox"] {
      display: none;
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="checkboxes"] [data-group="checkbox"] input[type="checkbox"]+label::after {
      content: none;
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="checkboxes"] [data-group="checkbox"] input[type="checkbox"]:checked+label::after {
      border-color: #ffffff;
      content: "";
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="checkboxes"] [data-group="checkbox"] input[type="checkbox"]:checked+label::before {
      background: #10bf7a;
      border-color: #10bf7a;
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="checkboxes"] [data-group="checkbox"] label {
      position: relative;
      display: inline-block;
      padding-left: 28px;
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="checkboxes"] [data-group="checkbox"] label::before,
    .formkit-form[data-uid="b3e2fda9e7"] [data-group="checkboxes"] [data-group="checkbox"] label::after {
      position: absolute;
      content: "";
      display: inline-block;
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="checkboxes"] [data-group="checkbox"] label::before {
      height: 16px;
      width: 16px;
      border: 1px solid #e3e3e3;
      background: #ffffff;
      left: 0px;
      top: 3px;
    }

    .formkit-form[data-uid="b3e2fda9e7"] [data-group="checkboxes"] [data-group="checkbox"] label::after {
      height: 4px;
      width: 8px;
      border-left: 2px solid #4d4d4d;
      border-bottom: 2px solid #4d4d4d;
      -webkit-transform: rotate(-45deg);
      -ms-transform: rotate(-45deg);
      transform: rotate(-45deg);
      left: 4px;
      top: 8px;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-alert {
      background: #f9fafb;
      border: 1px solid #e3e3e3;
      border-radius: 5px;
      -webkit-flex: 1 0 auto;
      -ms-flex: 1 0 auto;
      flex: 1 0 auto;
      list-style: none;
      margin: 25px auto;
      padding: 12px;
      text-align: center;
      width: 100%;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-alert:empty {
      display: none;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-alert-success {
      background: #d3fbeb;
      border-color: #10bf7a;
      color: #0c905c;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-alert-error {
      background: #fde8e2;
      border-color: #f2643b;
      color: #ea4110;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-spinner {
      display: -webkit-box;
      display: -webkit-flex;
      display: -ms-flexbox;
      display: flex;
      height: 0px;
      width: 0px;
      margin: 0 auto;
      position: absolute;
      top: 0;
      left: 0;
      right: 0;
      overflow: hidden;
      text-align: center;
      -webkit-transition: all 300ms ease-in-out;
      transition: all 300ms ease-in-out;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-spinner>div {
      margin: auto;
      width: 12px;
      height: 12px;
      background-color: #fff;
      opacity: 0.3;
      border-radius: 100%;
      display: inline-block;
      -webkit-animation: formkit-bouncedelay-formkit-form-data-uid-b3e2fda9e7- 1.4s infinite ease-in-out both;
      animation: formkit-bouncedelay-formkit-form-data-uid-b3e2fda9e7- 1.4s infinite ease-in-out both;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-spinner>div:nth-child(1) {
      -webkit-animation-delay: -0.32s;
      animation-delay: -0.32s;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-spinner>div:nth-child(2) {
      -webkit-animation-delay: -0.16s;
      animation-delay: -0.16s;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-submit[data-active] .formkit-spinner {
      opacity: 1;
      height: 100%;
      width: 50px;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-submit[data-active] .formkit-spinner~span {
      opacity: 0;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-powered-by[data-active="false"] {
      opacity: 0.35;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-powered-by-convertkit-container {
      display: -webkit-box;
      display: -webkit-flex;
      display: -ms-flexbox;
      display: flex;
      width: 100%;
      margin: 10px 0;
      position: relative;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-powered-by-convertkit-container[data-active="false"] {
      opacity: 0.35;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-powered-by-convertkit {
      -webkit-align-items: center;
      -webkit-box-align: center;
      -ms-flex-align: center;
      align-items: center;
      background-color: #ffffff;
      border: 1px solid #dde2e7;
      border-radius: 4px;
      color: #373f45;
      cursor: pointer;
      display: block;
      height: 36px;
      margin: 0 auto;
      opacity: 0.95;
      padding: 0;
      -webkit-text-decoration: none;
      text-decoration: none;
      text-indent: 100%;
      -webkit-transition: ease-in-out all 200ms;
      transition: ease-in-out all 200ms;
      white-space: nowrap;
      overflow: hidden;
      -webkit-user-select: none;
      -moz-user-select: none;
      -ms-user-select: none;
      user-select: none;
      width: 190px;
      background-repeat: no-repeat;
      background-position: center;
      background-image: url("data:image/svg+xml;charset=utf8,%3Csvg width='162' height='20' viewBox='0 0 162 20' fill='none' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath d='M83.0561 15.2457C86.675 15.2457 89.4722 12.5154 89.4722 9.14749C89.4722 5.99211 86.8443 4.06563 85.1038 4.06563C82.6801 4.06563 80.7373 5.76407 80.4605 8.28551C80.4092 8.75244 80.0387 9.14403 79.5686 9.14069C78.7871 9.13509 77.6507 9.12841 76.9314 9.13092C76.6217 9.13199 76.3658 8.88106 76.381 8.57196C76.4895 6.38513 77.2218 4.3404 78.618 2.76974C80.1695 1.02445 82.4289 0 85.1038 0C89.5979 0 93.8406 4.07791 93.8406 9.14749C93.8406 14.7608 89.1832 19.3113 83.1517 19.3113C78.8502 19.3113 74.5179 16.5041 73.0053 12.5795C72.9999 12.565 72.9986 12.5492 73.0015 12.534C73.0218 12.4179 73.0617 12.3118 73.1011 12.2074C73.1583 12.0555 73.2143 11.907 73.2062 11.7359L73.18 11.1892C73.174 11.0569 73.2075 10.9258 73.2764 10.8127C73.3452 10.6995 73.4463 10.6094 73.5666 10.554L73.7852 10.4523C73.9077 10.3957 74.0148 10.3105 74.0976 10.204C74.1803 10.0974 74.2363 9.97252 74.2608 9.83983C74.3341 9.43894 74.6865 9.14749 75.0979 9.14749C75.7404 9.14749 76.299 9.57412 76.5088 10.1806C77.5188 13.1 79.1245 15.2457 83.0561 15.2457Z' fill='%23373F45'/%3E%3Cpath d='M155.758 6.91365C155.028 6.91365 154.804 6.47916 154.804 5.98857C154.804 5.46997 154.986 5.06348 155.758 5.06348C156.53 5.06348 156.712 5.46997 156.712 5.98857C156.712 6.47905 156.516 6.91365 155.758 6.91365ZM142.441 12.9304V9.32833L141.415 9.32323V8.90392C141.415 8.44719 141.786 8.07758 142.244 8.07986L142.441 8.08095V6.55306L144.082 6.09057V8.08073H145.569V8.50416C145.569 8.61242 145.548 8.71961 145.506 8.81961C145.465 8.91961 145.404 9.01047 145.328 9.08699C145.251 9.16351 145.16 9.2242 145.06 9.26559C144.96 9.30698 144.853 9.32826 144.745 9.32822H144.082V12.7201C144.082 13.2423 144.378 13.4256 144.76 13.4887C145.209 13.5629 145.583 13.888 145.583 14.343V14.9626C144.029 14.9626 142.441 14.8942 142.441 12.9304Z' fill='%23373F45'/%3E%3Cpath d='M110.058 
7.92554C108.417 7.88344 106.396 8.92062 106.396 11.5137C106.396 14.0646 108.417 15.0738 110.058 15.0318C111.742 15.0738 113.748 14.0646 113.748 11.5137C113.748 8.92062 111.742 7.88344 110.058 7.92554ZM110.07 13.7586C108.878 13.7586 108.032 12.8905 108.032 11.461C108.032 10.1013 108.878 9.20569 110.071 9.20569C111.263 9.20569 112.101 10.0995 112.101 11.459C112.101 12.8887 111.263 13.7586 110.07 13.7586Z' fill='%23373F45'/%3E%3Cpath d='M118.06 7.94098C119.491 7.94098 120.978 8.33337 120.978 11.1366V14.893H120.063C119.608 14.893 119.238 14.524 119.238 14.0689V10.9965C119.238 9.66506 118.747 9.16047 117.891 9.16047C117.414 9.16047 116.797 9.52486 116.502 9.81915V14.069C116.502 14.1773 116.481 14.2845 116.44 14.3845C116.398 14.4845 116.337 14.5753 116.261 14.6519C116.184 14.7284 116.093 14.7891 115.993 14.8305C115.893 14.8719 115.786 14.8931 115.678 14.8931H114.847V8.10918H115.773C115.932 8.10914 116.087 8.16315 116.212 8.26242C116.337 8.36168 116.424 8.50033 116.46 8.65577C116.881 8.19328 117.428 7.94098 118.06 7.94098ZM122.854 8.09713C123.024 8.09708 123.19 8.1496 123.329 8.2475C123.468 8.34541 123.574 8.48391 123.631 8.64405L125.133 12.8486L126.635 8.64415C126.692 8.48402 126.798 8.34551 126.937 8.2476C127.076 8.1497 127.242 8.09718 127.412 8.09724H128.598L126.152 14.3567C126.091 14.5112 125.986 14.6439 125.849 14.7374C125.711 14.831 125.549 14.881 125.383 14.8809H124.333L121.668 8.09713H122.854Z' fill='%23373F45'/%3E%3Cpath d='M135.085 14.5514C134.566 14.7616 133.513 15.0416 132.418 15.0416C130.496 15.0416 129.024 13.9345 129.024 11.4396C129.024 9.19701 130.451 7.99792 132.191 7.99792C134.338 7.99792 135.254 9.4378 135.158 11.3979C135.139 11.8029 134.786 12.0983 134.38 12.0983H130.679C130.763 13.1916 131.562 13.7662 132.615 13.7662C133.028 13.7662 133.462 13.7452 133.983 13.6481C134.535 13.545 135.085 13.9375 135.085 14.4985V14.5514ZM133.673 10.949C133.785 9.87621 133.061 9.28752 132.191 9.28752C131.321 9.28752 130.734 9.93979 130.679 10.9489L133.673 10.949Z' 
fill='%23373F45'/%3E%3Cpath d='M137.345 8.11122C137.497 8.11118 137.645 8.16229 137.765 8.25635C137.884 8.35041 137.969 8.48197 138.005 8.62993C138.566 8.20932 139.268 7.94303 139.759 7.94303C139.801 7.94303 140.068 7.94303 140.489 7.99913V8.7265C140.489 9.11748 140.15 9.4147 139.759 9.4147C139.31 9.4147 138.651 9.5829 138.131 9.8773V14.8951H136.462V8.11112L137.345 8.11122ZM156.6 14.0508V8.09104H155.769C155.314 8.09104 154.944 8.45999 154.944 8.9151V14.8748H155.775C156.23 14.8748 156.6 14.5058 156.6 14.0508ZM158.857 12.9447V9.34254H157.749V8.91912C157.749 8.46401 158.118 8.09506 158.574 8.09506H158.857V6.56739L160.499 6.10479V8.09506H161.986V8.51848C161.986 8.97359 161.617 9.34254 161.161 9.34254H160.499V12.7345C160.499 13.2566 160.795 13.44 161.177 13.503C161.626 13.5774 162 13.9024 162 14.3574V14.977C160.446 14.977 158.857 14.9086 158.857 12.9447ZM98.1929 10.1124C98.2033 6.94046 100.598 5.16809 102.895 5.16809C104.171 5.16809 105.342 5.44285 106.304 6.12953L105.914 6.6631C105.654 7.02011 105.16 7.16194 104.749 6.99949C104.169 6.7702 103.622 6.7218 103.215 6.7218C101.335 6.7218 99.9169 7.92849 99.9068 10.1123C99.9169 12.2959 101.335 13.5201 103.215 13.5201C103.622 13.5201 104.169 13.4717 104.749 13.2424C105.16 13.0799 105.654 13.2046 105.914 13.5615L106.304 14.0952C105.342 14.7819 104.171 15.0566 102.895 15.0566C100.598 15.0566 98.2033 13.2842 98.1929 10.1124ZM147.619 5.21768C148.074 5.21768 148.444 5.58663 148.444 6.04174V9.81968L151.82 5.58131C151.897 5.47733 151.997 5.39282 152.112 5.3346C152.227 5.27638 152.355 5.24607 152.484 5.24611H153.984L150.166 10.0615L153.984 14.8749H152.484C152.355 14.8749 152.227 14.8446 152.112 14.7864C151.997 14.7281 151.897 14.6436 151.82 14.5397L148.444 10.3025V14.0508C148.444 14.5059 148.074 14.8749 147.619 14.8749H146.746V5.21768H147.619Z' fill='%23373F45'/%3E%3Cpath d='M0.773438 6.5752H2.68066C3.56543 6.5752 4.2041 6.7041 4.59668 6.96191C4.99219 7.21973 5.18994 7.62695 5.18994 8.18359C5.18994 8.55859 5.09326 8.87061 4.8999 
9.11963C4.70654 9.36865 4.42822 9.52539 4.06494 9.58984V9.63379C4.51611 9.71875 4.84717 9.88721 5.05811 10.1392C5.27197 10.3882 5.37891 10.7266 5.37891 11.1543C5.37891 11.7314 5.17676 12.1841 4.77246 12.5122C4.37109 12.8374 3.81152 13 3.09375 13H0.773438V6.5752ZM1.82373 9.22949H2.83447C3.27393 9.22949 3.59473 9.16064 3.79688 9.02295C3.99902 8.88232 4.1001 8.64502 4.1001 8.31104C4.1001 8.00928 3.99023 7.79102 3.77051 7.65625C3.55371 7.52148 3.20801 7.4541 2.7334 7.4541H1.82373V9.22949ZM1.82373 10.082V12.1167H2.93994C3.37939 12.1167 3.71045 12.0332 3.93311 11.8662C4.15869 11.6963 4.27148 11.4297 4.27148 11.0664C4.27148 10.7324 4.15723 10.4849 3.92871 10.3237C3.7002 10.1626 3.35303 10.082 2.88721 10.082H1.82373Z' fill='%23373F45'/%3E%3Cpath d='M13.011 6.5752V10.7324C13.011 11.207 12.9084 11.623 12.7034 11.9805C12.5012 12.335 12.2068 12.6089 11.8201 12.8022C11.4363 12.9927 10.9763 13.0879 10.4402 13.0879C9.6433 13.0879 9.02368 12.877 8.5813 12.4551C8.13892 12.0332 7.91772 11.4531 7.91772 10.7148V6.5752H8.9724V10.6401C8.9724 11.1704 9.09546 11.5615 9.34155 11.8135C9.58765 12.0654 9.96557 12.1914 10.4753 12.1914C11.4656 12.1914 11.9607 11.6714 11.9607 10.6313V6.5752H13.011Z' fill='%23373F45'/%3E%3Cpath d='M15.9146 13V6.5752H16.9649V13H15.9146Z' fill='%23373F45'/%3E%3Cpath d='M19.9255 13V6.5752H20.9758V12.0991H23.696V13H19.9255Z' fill='%23373F45'/%3E%3Cpath d='M28.2828 13H27.2325V7.47607H25.3428V6.5752H30.1724V7.47607H28.2828V13Z' fill='%23373F45'/%3E%3Cpath d='M41.9472 13H40.8046L39.7148 9.16796C39.6679 9.00097 39.6093 8.76074 39.539 8.44727C39.4687 8.13086 39.4262 7.91113 39.4116 7.78809C39.3823 7.97559 39.3339 8.21875 39.2665 8.51758C39.2021 8.81641 39.1479 9.03905 39.1039 9.18554L38.0405 13H36.8979L36.0673 9.7832L35.2236 6.5752H36.2958L37.2143 10.3193C37.3578 10.9199 37.4604 11.4502 37.5219 11.9102C37.5541 11.6611 37.6025 11.3828 37.6669 11.0752C37.7314 10.7676 37.79 10.5186 37.8427 10.3281L38.8886 6.5752H39.9301L41.0024 10.3457C41.1049 10.6943 41.2133 11.2158 41.3276 
11.9102C41.3715 11.4912 41.477 10.958 41.644 10.3105L42.558 6.5752H43.6215L41.9472 13Z' fill='%23373F45'/%3E%3Cpath d='M45.7957 13V6.5752H46.846V13H45.7957Z' fill='%23373F45'/%3E%3Cpath d='M52.0258 13H50.9755V7.47607H49.0859V6.5752H53.9155V7.47607H52.0258V13Z' fill='%23373F45'/%3E%3Cpath d='M61.2312 13H60.1765V10.104H57.2146V13H56.1643V6.5752H57.2146V9.20312H60.1765V6.5752H61.2312V13Z' fill='%23373F45'/%3E%3C/svg%3E");
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-powered-by-convertkit:hover,
    .formkit-form[data-uid="b3e2fda9e7"] .formkit-powered-by-convertkit:focus {
      background-color: #ffffff;
      -webkit-transform: scale(1.025) perspective(1px);
      -ms-transform: scale(1.025) perspective(1px);
      transform: scale(1.025) perspective(1px);
      opacity: 1;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-powered-by-convertkit[data-variant="dark"],
    .formkit-form[data-uid="b3e2fda9e7"] .formkit-powered-by-convertkit[data-variant="light"] {
      background-color: transparent;
      border-color: transparent;
      width: 166px;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-powered-by-convertkit[data-variant="light"] {
      color: #ffffff;
      background-image: url("data:image/svg+xml;charset=utf8,%3Csvg width='162' height='20' viewBox='0 0 162 20' fill='none' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath d='M83.0561 15.2457C86.675 15.2457 89.4722 12.5154 89.4722 9.14749C89.4722 5.99211 86.8443 4.06563 85.1038 4.06563C82.6801 4.06563 80.7373 5.76407 80.4605 8.28551C80.4092 8.75244 80.0387 9.14403 79.5686 9.14069C78.7871 9.13509 77.6507 9.12841 76.9314 9.13092C76.6217 9.13199 76.3658 8.88106 76.381 8.57196C76.4895 6.38513 77.2218 4.3404 78.618 2.76974C80.1695 1.02445 82.4289 0 85.1038 0C89.5979 0 93.8406 4.07791 93.8406 9.14749C93.8406 14.7608 89.1832 19.3113 83.1517 19.3113C78.8502 19.3113 74.5179 16.5041 73.0053 12.5795C72.9999 12.565 72.9986 12.5492 73.0015 12.534C73.0218 12.4179 73.0617 12.3118 73.1011 12.2074C73.1583 12.0555 73.2143 11.907 73.2062 11.7359L73.18 11.1892C73.174 11.0569 73.2075 10.9258 73.2764 10.8127C73.3452 10.6995 73.4463 10.6094 73.5666 10.554L73.7852 10.4523C73.9077 10.3957 74.0148 10.3105 74.0976 10.204C74.1803 10.0974 74.2363 9.97252 74.2608 9.83983C74.3341 9.43894 74.6865 9.14749 75.0979 9.14749C75.7404 9.14749 76.299 9.57412 76.5088 10.1806C77.5188 13.1 79.1245 15.2457 83.0561 15.2457Z' fill='white'/%3E%3Cpath d='M155.758 6.91365C155.028 6.91365 154.804 6.47916 154.804 5.98857C154.804 5.46997 154.986 5.06348 155.758 5.06348C156.53 5.06348 156.712 5.46997 156.712 5.98857C156.712 6.47905 156.516 6.91365 155.758 6.91365ZM142.441 12.9304V9.32833L141.415 9.32323V8.90392C141.415 8.44719 141.786 8.07758 142.244 8.07986L142.441 8.08095V6.55306L144.082 6.09057V8.08073H145.569V8.50416C145.569 8.61242 145.548 8.71961 145.506 8.81961C145.465 8.91961 145.404 9.01047 145.328 9.08699C145.251 9.16351 145.16 9.2242 145.06 9.26559C144.96 9.30698 144.853 9.32826 144.745 9.32822H144.082V12.7201C144.082 13.2423 144.378 13.4256 144.76 13.4887C145.209 13.5629 145.583 13.888 145.583 14.343V14.9626C144.029 14.9626 142.441 14.8942 142.441 12.9304Z' fill='white'/%3E%3Cpath d='M110.058 
7.92554C108.417 7.88344 106.396 8.92062 106.396 11.5137C106.396 14.0646 108.417 15.0738 110.058 15.0318C111.742 15.0738 113.748 14.0646 113.748 11.5137C113.748 8.92062 111.742 7.88344 110.058 7.92554ZM110.07 13.7586C108.878 13.7586 108.032 12.8905 108.032 11.461C108.032 10.1013 108.878 9.20569 110.071 9.20569C111.263 9.20569 112.101 10.0995 112.101 11.459C112.101 12.8887 111.263 13.7586 110.07 13.7586Z' fill='white'/%3E%3Cpath d='M118.06 7.94098C119.491 7.94098 120.978 8.33337 120.978 11.1366V14.893H120.063C119.608 14.893 119.238 14.524 119.238 14.0689V10.9965C119.238 9.66506 118.747 9.16047 117.891 9.16047C117.414 9.16047 116.797 9.52486 116.502 9.81915V14.069C116.502 14.1773 116.481 14.2845 116.44 14.3845C116.398 14.4845 116.337 14.5753 116.261 14.6519C116.184 14.7284 116.093 14.7891 115.993 14.8305C115.893 14.8719 115.786 14.8931 115.678 14.8931H114.847V8.10918H115.773C115.932 8.10914 116.087 8.16315 116.212 8.26242C116.337 8.36168 116.424 8.50033 116.46 8.65577C116.881 8.19328 117.428 7.94098 118.06 7.94098ZM122.854 8.09713C123.024 8.09708 123.19 8.1496 123.329 8.2475C123.468 8.34541 123.574 8.48391 123.631 8.64405L125.133 12.8486L126.635 8.64415C126.692 8.48402 126.798 8.34551 126.937 8.2476C127.076 8.1497 127.242 8.09718 127.412 8.09724H128.598L126.152 14.3567C126.091 14.5112 125.986 14.6439 125.849 14.7374C125.711 14.831 125.549 14.881 125.383 14.8809H124.333L121.668 8.09713H122.854Z' fill='white'/%3E%3Cpath d='M135.085 14.5514C134.566 14.7616 133.513 15.0416 132.418 15.0416C130.496 15.0416 129.024 13.9345 129.024 11.4396C129.024 9.19701 130.451 7.99792 132.191 7.99792C134.338 7.99792 135.254 9.4378 135.158 11.3979C135.139 11.8029 134.786 12.0983 134.38 12.0983H130.679C130.763 13.1916 131.562 13.7662 132.615 13.7662C133.028 13.7662 133.462 13.7452 133.983 13.6481C134.535 13.545 135.085 13.9375 135.085 14.4985V14.5514ZM133.673 10.949C133.785 9.87621 133.061 9.28752 132.191 9.28752C131.321 9.28752 130.734 9.93979 130.679 10.9489L133.673 10.949Z' 
fill='white'/%3E%3Cpath d='M137.345 8.11122C137.497 8.11118 137.645 8.16229 137.765 8.25635C137.884 8.35041 137.969 8.48197 138.005 8.62993C138.566 8.20932 139.268 7.94303 139.759 7.94303C139.801 7.94303 140.068 7.94303 140.489 7.99913V8.7265C140.489 9.11748 140.15 9.4147 139.759 9.4147C139.31 9.4147 138.651 9.5829 138.131 9.8773V14.8951H136.462V8.11112L137.345 8.11122ZM156.6 14.0508V8.09104H155.769C155.314 8.09104 154.944 8.45999 154.944 8.9151V14.8748H155.775C156.23 14.8748 156.6 14.5058 156.6 14.0508ZM158.857 12.9447V9.34254H157.749V8.91912C157.749 8.46401 158.118 8.09506 158.574 8.09506H158.857V6.56739L160.499 6.10479V8.09506H161.986V8.51848C161.986 8.97359 161.617 9.34254 161.161 9.34254H160.499V12.7345C160.499 13.2566 160.795 13.44 161.177 13.503C161.626 13.5774 162 13.9024 162 14.3574V14.977C160.446 14.977 158.857 14.9086 158.857 12.9447ZM98.1929 10.1124C98.2033 6.94046 100.598 5.16809 102.895 5.16809C104.171 5.16809 105.342 5.44285 106.304 6.12953L105.914 6.6631C105.654 7.02011 105.16 7.16194 104.749 6.99949C104.169 6.7702 103.622 6.7218 103.215 6.7218C101.335 6.7218 99.9169 7.92849 99.9068 10.1123C99.9169 12.2959 101.335 13.5201 103.215 13.5201C103.622 13.5201 104.169 13.4717 104.749 13.2424C105.16 13.0799 105.654 13.2046 105.914 13.5615L106.304 14.0952C105.342 14.7819 104.171 15.0566 102.895 15.0566C100.598 15.0566 98.2033 13.2842 98.1929 10.1124ZM147.619 5.21768C148.074 5.21768 148.444 5.58663 148.444 6.04174V9.81968L151.82 5.58131C151.897 5.47733 151.997 5.39282 152.112 5.3346C152.227 5.27638 152.355 5.24607 152.484 5.24611H153.984L150.166 10.0615L153.984 14.8749H152.484C152.355 14.8749 152.227 14.8446 152.112 14.7864C151.997 14.7281 151.897 14.6436 151.82 14.5397L148.444 10.3025V14.0508C148.444 14.5059 148.074 14.8749 147.619 14.8749H146.746V5.21768H147.619Z' fill='white'/%3E%3Cpath d='M0.773438 6.5752H2.68066C3.56543 6.5752 4.2041 6.7041 4.59668 6.96191C4.99219 7.21973 5.18994 7.62695 5.18994 8.18359C5.18994 8.55859 5.09326 8.87061 4.8999 
    .formkit-form[data-uid="b3e2fda9e7"] .formkit-submit {
      margin: 0 0 15px 0;
      -webkit-flex: 1 0 100%;
      -ms-flex: 1 0 100%;
      flex: 1 0 100%;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-powered-by-convertkit-container {
      margin: 0;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-submit {
      position: static;
    }

    .formkit-form[data-uid="b3e2fda9e7"][min-width~="700"] [data-style="clean"],
    .formkit-form[data-uid="b3e2fda9e7"][min-width~="800"] [data-style="clean"] {
      padding: 10px;
      padding-top: 56px;
    }

    .formkit-form[data-uid="b3e2fda9e7"][min-width~="700"] .formkit-fields[data-stacked="false"],
    .formkit-form[data-uid="b3e2fda9e7"][min-width~="800"] .formkit-fields[data-stacked="false"] {
      margin-left: -5px;
      margin-right: -5px;
    }

    .formkit-form[data-uid="b3e2fda9e7"][min-width~="700"] .formkit-fields[data-stacked="false"] .formkit-field,
    .formkit-form[data-uid="b3e2fda9e7"][min-width~="800"] .formkit-fields[data-stacked="false"] .formkit-field,
    .formkit-form[data-uid="b3e2fda9e7"][min-width~="700"] .formkit-fields[data-stacked="false"] .formkit-submit,
    .formkit-form[data-uid="b3e2fda9e7"][min-width~="800"] .formkit-fields[data-stacked="false"] .formkit-submit {
      margin: 0 5px 15px 5px;
    }

    .formkit-form[data-uid="b3e2fda9e7"][min-width~="700"] .formkit-fields[data-stacked="false"] .formkit-field,
    .formkit-form[data-uid="b3e2fda9e7"][min-width~="800"] .formkit-fields[data-stacked="false"] .formkit-field {
      -webkit-flex: 100 1 auto;
      -ms-flex: 100 1 auto;
      flex: 100 1 auto;
    }

    .formkit-form[data-uid="b3e2fda9e7"][min-width~="700"] .formkit-fields[data-stacked="false"] .formkit-submit,
    .formkit-form[data-uid="b3e2fda9e7"][min-width~="800"] .formkit-fields[data-stacked="false"] .formkit-submit {
      -webkit-flex: 1 1 auto;
      -ms-flex: 1 1 auto;
      flex: 1 1 auto;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-input {
      color: #727272;
      border-color: #b2b2b2;
      height: 28px;
      font-weight: 400;
    }

    .formkit-form[data-uid="b3e2fda9e7"] .formkit-submit {
      color: #ffffff;
      height: 27px;
      line-height: 4px;
      border-radius: 2px;
      font-weight: 400;
    }
  </style>
</form>

Text Content

Applied LLMs


ON THIS PAGE

 * Tactical: Nuts & bolts of working with LLMs
   * Prompting
     * Focus on getting the most out of fundamental prompting techniques
     * Structure your inputs and outputs
     * Have small prompts that do one thing, and only one thing, well
     * Craft your context tokens
   * Information Retrieval / RAG
     * The quality of your RAG’s output is dependent on the quality of retrieved
       documents, which in turn can be considered along a few factors
     * Don’t forget keyword search; use it as a baseline and in hybrid search
     * Prefer RAG over fine-tuning for new knowledge
     * Long-context models won’t make RAG obsolete
   * Tuning and optimizing workflows
     * Step-by-step, multi-turn “flows” can give large boosts
     * Prioritize deterministic workflows for now
     * Getting more diverse outputs beyond temperature
     * Caching is underrated
     * When to finetune
   * Evaluation & Monitoring
     * Create a few assertion-based unit tests from real input/output samples
     * LLM-as-Judge can work (somewhat), but it’s not a silver bullet
     * The “intern test” for evaluating generations
     * Overemphasizing certain evals can hurt overall performance
     * Simplify annotation to binary tasks or pairwise comparisons
     * (Reference-free) evals and guardrails can be used interchangeably
     * LLMs will return output even when they shouldn’t
     * Hallucinations are a stubborn problem
 * Operational: Day-to-day and org concerns
   * Data
     * Check for development-prod skew
     * Look at samples of LLM inputs and outputs every day
   * Working with models
     * Generate structured output to ease downstream integration
     * Migrating prompts across models is a pain in the ass
     * Version and pin your models
     * Choose the smallest model that gets the job done
   * Product
     * Involve design early and often
     * Design your UX for Human-In-The-Loop
     * Prioritize your hierarchy of needs ruthlessly
     * Calibrate your risk tolerance based on the use case
   * Team & Roles
     * Focus on process, not tools
     * Always be experimenting
     * Empower everyone to use new AI technology
     * Don’t fall into the trap of “AI Engineering is all I need”
 * Strategic: Long-term business strategy (pending)
   * Stay In Touch
   * Acknowledgements
   * About the authors



WHAT WE’VE LEARNED FROM A YEAR OF BUILDING WITH LLMS

A practical guide to building successful LLM products.
Authors

Eugene Yan

Bryan Bischof

Charles Frye

Hamel Husain

Jason Liu

Shreya Shankar

Published

June 8, 2024

> Also published on O’Reilly Media in three parts: Tactical, Operational,
> Strategic (pending).

It’s an exciting time to build with large language models (LLMs). Over the past
year, LLMs have become “good enough” for real-world applications. And they’re
getting better and cheaper every year. Coupled with a parade of demos on social
media, this is driving an estimated $200B investment in AI by 2025. Furthermore,
provider APIs have made LLMs more accessible, allowing everyone, not just ML
engineers and scientists, to build intelligence into their products.
Nonetheless, while the barrier to entry for building with AI has been lowered,
creating products and systems that are effective—beyond a demo—remains
deceptively difficult.

We’ve spent the past year building, and have discovered many sharp edges along
the way. While we don’t claim to speak for the entire industry, we’d like to
share what we’ve learned to help you avoid our mistakes and iterate faster.
These are organized into three sections:

 * Tactical: Some practices for prompting, RAG, flow engineering, evals, and
   monitoring. Whether you’re a practitioner building with LLMs, or hacking on
   weekend projects, this section was written for you.
 * Operational: The organizational, day-to-day concerns of shipping products,
   and how to build an effective team. For product/technical leaders looking to
   deploy sustainably and reliably.
 * Strategic: The long-term, big-picture view, with opinionated takes such as
   “no GPU before PMF” and “focus on the system not the model”, and how to
   iterate. Written with founders and executives in mind.

Our intent is to make this a practical guide to building successful products
with LLMs, drawing from our own experiences and pointing to examples from around
the industry.

Ready to dive in? Let’s go.

--------------------------------------------------------------------------------


TACTICAL: NUTS & BOLTS OF WORKING WITH LLMS

In this section, we share some best practices for the core components of the
emerging LLM stack: prompting tips to improve quality and reliability,
evaluation strategies to assess output, retrieval-augmented generation ideas to
improve grounding, and more. We’ll also explore how to design human-in-the-loop
workflows. While the technology is still rapidly developing, we hope that these
lessons, the by-product of countless experiments we’ve collectively run, will
stand the test of time and help you build and ship robust LLM applications.


PROMPTING

We recommend starting with prompting when developing new applications. It’s easy
to both underestimate and overestimate its importance. It’s underestimated
because the right prompting techniques, when used correctly, can get us very
far. It’s overestimated because even prompt-based applications require
significant engineering around the prompt to work well.


FOCUS ON GETTING THE MOST OUT OF FUNDAMENTAL PROMPTING TECHNIQUES

A few prompting techniques have consistently helped with improving performance
across a variety of models and tasks: n-shot prompts + in-context learning,
chain-of-thought, and providing relevant resources.

The idea of in-context learning via n-shot prompts is to provide the LLM with a
few examples that demonstrate the task and align outputs to our expectations. A
few tips:

 * If n is too low, the model may over-anchor on those specific examples,
   hurting its ability to generalize. As a rule of thumb, aim for n ≥ 5. Don’t
   be afraid to go as high as a few dozen.
 * Examples should be representative of the expected input distribution. If
   you’re building a movie summarizer, include samples from different genres in
   roughly the same proportion you’d expect to see in practice.
 * You don’t necessarily need to provide the full input-output pairs. In many
   cases, examples of desired outputs are sufficient.
 * If using an LLM that supports tool use, your n-shot examples should also use
   the tools you want the agent to use.
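The mechanics are simple enough to sketch. Here is a minimal, illustrative helper for assembling n-shot chat messages; the message format mirrors common chat APIs, but the helper itself is our own assumption, not any provider’s SDK:

```python
def build_nshot_messages(instruction, examples, query):
    """Assemble chat messages with n-shot examples ahead of the real query.

    `examples` is a list of (input, output) pairs; draw them from the
    expected input distribution and aim for n >= 5 representative samples.
    """
    messages = [{"role": "system", "content": instruction}]
    for inp, out in examples:
        messages.append({"role": "user", "content": inp})
        messages.append({"role": "assistant", "content": out})
    messages.append({"role": "user", "content": query})
    return messages
```

Because the examples are plain data, it’s easy to audit that they stay representative as the prompt evolves.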

In Chain-of-Thought (CoT) prompting, we encourage the LLM to explain its thought
process before returning the final answer. Think of it as providing the LLM with
a sketchpad so it doesn’t have to hold everything in memory. The original
approach was simply to add the phrase “Let’s think step-by-step” to the
instructions, but we’ve found it helpful to make the CoT more specific; adding
an extra sentence or two of specificity often reduces hallucination rates
significantly. For example, when asking an LLM to summarize a meeting
transcript, we can be explicit about the steps:

 * First, list out the key decisions, follow-up items, and associated owners in
   a sketchpad.
 * Then, check that the details in the sketchpad are factually consistent with
   the transcript.
 * Finally, synthesize the key points into a concise summary.
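As a sketch, those explicit steps can live directly in a prompt template; the exact wording below is a hypothetical example, not a tested recipe:

```python
COT_TEMPLATE = """Summarize the meeting transcript below.
First, list out the key decisions, follow-up items, and associated owners in a sketchpad.
Then, check that the details in the sketchpad are factually consistent with the transcript.
Finally, synthesize the key points into a concise summary.

<transcript>
{transcript}
</transcript>"""


def cot_summary_prompt(transcript: str) -> str:
    # Spelling out the steps makes the model's "sketchpad" work auditable.
    return COT_TEMPLATE.format(transcript=transcript)
```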

Note that recently, some doubt has been cast on whether this technique is as
powerful as believed. Additionally, there’s significant debate about exactly
what goes on during inference when Chain-of-Thought is used. Regardless, this
technique is one to experiment with when possible.

Providing relevant resources is a powerful mechanism to expand the model’s
knowledge base, reduce hallucinations, and increase the user’s trust. Often
accomplished via Retrieval Augmented Generation (RAG), providing the model with
snippets of text that it can directly utilize in its response is an essential
technique. When providing the relevant resources, it’s not enough to merely
include them; don’t forget to tell the model to prioritize their use, refer to
them directly, and sometimes to mention when none of the resources are
sufficient. These help “ground” agent responses to a corpus of resources.
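A minimal sketch of such a grounded prompt, with retrieved snippets wrapped in tags and explicit instructions to rely on them; the tag names and wording are our own illustrative choices:

```python
def grounded_prompt(question, snippets):
    """Inject retrieved snippets and tell the model how to use them."""
    docs = "\n".join(f"<doc id={i}>{s}</doc>" for i, s in enumerate(snippets))
    return (
        "Answer using ONLY the documents below, and refer to them directly "
        "by id. If none of the documents are sufficient, say so explicitly.\n\n"
        f"{docs}\n\nQuestion: {question}"
    )
```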


STRUCTURE YOUR INPUTS AND OUTPUTS

Structured input and output help models better understand the input as well as
return output that can reliably integrate with downstream systems. Adding
serialization formatting to your inputs can help provide more clues to the model
as to the relationships between tokens in the context, additional metadata to
specific tokens (like types), or relate the request to similar examples in the
model’s training data.

As an example, many questions on the internet about writing SQL begin by
specifying the SQL schema. Thus, you may expect that effective prompting for
Text-to-SQL should include structured schema definitions; indeed, it does.

Structured output serves a similar purpose, but it also simplifies integration
into downstream components of your system. Instructor and Outlines work well for
structured output. (If you’re importing an LLM API SDK, use Instructor; if
you’re importing Huggingface for a self-hosted model, use Outlines.) Structured
input expresses tasks clearly and resembles how the training data is formatted,
increasing the probability of better output.

When using structured input, be aware that each LLM family has their own
preferences. Claude prefers <xml> while GPT favors Markdown and JSON. With XML,
you can even pre-fill Claude’s responses by providing a <response> tag like so.

messages=[
    {
        "role": "user",
        "content": """Extract the <name>, <size>, <price>, and <color> from this product description into your <response>.
            <description>The SmartHome Mini is a compact smart home assistant available in black or white for only $49.99. At just 5 inches wide, it lets you control lights, thermostats, and other connected devices via voice or app—no matter where you place it in your home. This affordable little hub brings convenient hands-free control to your smart devices.
            </description>"""
    },
    {
        "role": "assistant",
        "content": "<response><name>"
    }
]


HAVE SMALL PROMPTS THAT DO ONE THING, AND ONLY ONE THING, WELL

A common anti-pattern / code smell in software is the “God Object”, where we
have a single class or function that does everything. The same applies to
prompts too.

A prompt typically starts simple: A few sentences of instruction, a couple of
examples, and we’re good to go. But as we try to improve performance and handle
more edge cases, complexity creeps in. More instructions. Multi-step reasoning.
Dozens of examples. Before we know it, our initially simple prompt is now a
2,000-token Frankenstein. And to add insult to injury, it has worse performance
on the more common and straightforward inputs! GoDaddy shared this challenge as
their No. 1 lesson from building with LLMs.

Just like how we strive (read: struggle) to keep our systems and code simple, so
should we for our prompts. Instead of having a single, catch-all prompt for the
meeting transcript summarizer, we can break it into steps to:

 * Extract key decisions, action items, and owners into structured format
 * Check extracted details against the original transcription for consistency
 * Generate a concise summary from the structured details

As a result, we’ve split our single prompt into multiple prompts that are each
simple, focused, and easy to understand. And by breaking them up, we can now
iterate and eval each prompt individually.
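A sketch of the decomposed summarizer, with the model call stubbed out as a plain callable so each focused prompt can be evaluated in isolation (the prompts are abbreviated for illustration):

```python
def summarize_transcript(transcript, llm):
    """Three focused prompts instead of one catch-all prompt."""
    extracted = llm(
        "Extract key decisions, action items, and owners as structured data:\n"
        + transcript
    )
    checked = llm(
        "Check these details against the transcript for consistency.\n"
        f"Details: {extracted}\nTranscript: {transcript}"
    )
    return llm("Write a concise summary from these verified details:\n" + checked)
```

Each stage can now be swapped, evaled, or cached independently.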


CRAFT YOUR CONTEXT TOKENS

Rethink, and challenge, your assumptions about how much context you actually
need to send to the agent. Be like Michelangelo: do not build up your context
sculpture; chisel away the superfluous material until the sculpture is
revealed. RAG is a popular way to collate all of the potentially relevant blocks
of marble, but what are you doing to extract what’s necessary?

We’ve found that taking the final prompt sent to the model, with all of its
context construction, meta-prompting, and RAG results, putting it on a blank
page, and just reading it really helps you rethink your context. Using this
method, we have found redundancy, self-contradictory language, and poor
formatting.

The other key optimization is the structure of your context. Your bag-of-docs
representation isn’t helpful for humans; don’t assume it’s any good for agents.
Think carefully about how you structure your context to underscore the
relationships between parts of it, and make extraction as simple as possible.


INFORMATION RETRIEVAL / RAG

Beyond prompting, another effective way to steer an LLM is by providing
knowledge as part of the prompt. This grounds the LLM on the provided context
which is then used for in-context learning. This is known as retrieval-augmented
generation (RAG). Practitioners have found RAG effective at providing knowledge
and improving output, while requiring far less effort and cost compared to
finetuning. RAG is only as good as the retrieved documents’ relevance, density,
and detail.


THE QUALITY OF YOUR RAG’S OUTPUT IS DEPENDENT ON THE QUALITY OF RETRIEVED
DOCUMENTS, WHICH IN TURN CAN BE CONSIDERED ALONG A FEW FACTORS

The first and most obvious metric is relevance. This is typically quantified via
ranking metrics such as Mean Reciprocal Rank (MRR) or Normalized Discounted
Cumulative Gain (NDCG). MRR evaluates how well a system places the first
relevant result in a ranked list while NDCG considers the relevance of all the
results and their positions. They measure how good the system is at ranking
relevant documents higher and irrelevant documents lower. For example, if we’re
retrieving user summaries to generate movie review summaries, we’ll want to rank
reviews for the specific movie higher while excluding reviews for other movies.

Like traditional recommendation systems, the rank of retrieved items will have a
significant impact on how the LLM performs on downstream tasks. To measure the
impact, run a RAG-based task but with the retrieved items shuffled—how does the
RAG output perform?
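MRR in particular is only a few lines to compute; here is a minimal implementation for sanity-checking your retriever (NDCG follows the same shape, with position discounts over all results):

```python
def mean_reciprocal_rank(ranked_lists, relevant_sets):
    """Average 1/rank of the first relevant document across queries.

    ranked_lists: one ranked list of doc ids per query.
    relevant_sets: the set of relevant doc ids for each query, same order.
    """
    total = 0.0
    for ranking, relevant in zip(ranked_lists, relevant_sets):
        for position, doc_id in enumerate(ranking, start=1):
            if doc_id in relevant:
                total += 1.0 / position
                break
    return total / len(ranked_lists)
```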

Second, we also want to consider information density. If two documents are
equally relevant, we should prefer the one that’s more concise and has fewer
extraneous details. Returning to our movie example, we might consider the movie
transcript and all user reviews to be relevant in a broad sense. Nonetheless,
the top-rated reviews and editorial reviews will likely be more dense in
information.

Finally, consider the level of detail provided in the document. Imagine we’re
building a RAG system to generate SQL queries from natural language. We could
simply provide table schemas with column names as context. But, what if we
include column descriptions and some representative values? The additional
detail could help the LLM better understand the semantics of the table and thus
generate more correct SQL.


DON’T FORGET KEYWORD SEARCH; USE IT AS A BASELINE AND IN HYBRID SEARCH

Given how prevalent the embedding-based RAG demo is, it’s easy to forget or
overlook the decades of research and solutions in information retrieval.

Nonetheless, while embeddings are undoubtedly a powerful tool, they are not the
be all and end all. First, while they excel at capturing high-level semantic
similarity, they may struggle with more specific, keyword-based queries, like
when users search for names (e.g., Ilya), acronyms (e.g., RAG), or IDs (e.g.,
claude-3-sonnet). Keyword-based search algorithms, such as BM25, are explicitly
designed for this. And after years of keyword-based search, users have likely
taken it for granted and may get frustrated if the document they expect to
retrieve isn’t being returned.

> Vector embeddings do not magically solve search. In fact, the heavy lifting is
> in the step before you re-rank with semantic similarity search. Making a
> genuine improvement over BM25 or full-text search is hard. — Aravind Srinivas,
> CEO Perplexity.ai

> We’ve been communicating this to our customers and partners for months now.
> Nearest Neighbor Search with naive embeddings yields very noisy results and
> you’re likely better off starting with a keyword-based approach. — Beyang Liu,
> CTO Sourcegraph

Second, it’s more straightforward to understand why a document was retrieved
with keyword search—we can look at the keywords that match the query. In
contrast, embedding-based retrieval is less interpretable. Finally, thanks to
systems like Lucene and OpenSearch that have been optimized and battle-tested
over decades, keyword search is usually more computationally efficient.

In most cases, a hybrid will work best: keyword matching for the obvious
matches, and embeddings for synonyms, hypernyms, and spelling errors, as well as
multimodality (e.g., images and text). Shortwave shared how they built their RAG
pipeline, including query rewriting, keyword + embedding retrieval, and ranking.
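One common, simple way to fuse keyword and embedding results is reciprocal rank fusion (RRF); this is a generic illustration of hybrid ranking, not Shortwave’s actual pipeline:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked doc-id lists into one; k=60 is the conventional constant."""
    scores = {}
    for ranking in rankings:
        for position, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + position)
    return sorted(scores, key=scores.get, reverse=True)


# Keyword (BM25) hits and embedding hits, fused into one ranking.
fused = reciprocal_rank_fusion([["d1", "d2", "d3"], ["d2", "d3", "d1"]])
```

RRF needs only ranks, not scores, so it sidesteps the problem of calibrating BM25 scores against cosine similarities.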


PREFER RAG OVER FINE-TUNING FOR NEW KNOWLEDGE

Both RAG and fine-tuning can be used to incorporate new information into LLMs
and increase performance on specific tasks. Thus, which should we try first?

Recent research suggests that RAG may have an edge. One study compared RAG
against unsupervised finetuning (aka continued pretraining), evaluating both on
a subset of MMLU and current events. They found that RAG consistently
outperformed fine-tuning for knowledge encountered during training as well as
entirely new knowledge. In another paper, they compared RAG against supervised
finetuning on an agricultural dataset. Similarly, the performance boost from RAG
was greater than fine-tuning, especially for GPT-4 (see Table 20 of the paper).

Beyond improved performance, RAG comes with several practical advantages too.
First, compared to continuous pretraining or fine-tuning, it’s easier—and
cheaper!—to keep retrieval indices up-to-date. Second, if our retrieval indices
have problematic documents that contain toxic or biased content, we can easily
drop or modify the offending documents.

In addition, the R in RAG provides finer grained control over how we retrieve
documents. For example, if we’re hosting a RAG system for multiple
organizations, by partitioning the retrieval indices, we can ensure that each
organization can only retrieve documents from their own index. This ensures that
we don’t inadvertently expose information from one organization to another.
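The partitioning idea in miniature: retrieval is scoped to the caller’s index, so one organization can never surface another’s documents. This toy in-memory class stands in for real per-tenant indices:

```python
class PartitionedRetriever:
    """One retrieval index per organization; queries never cross partitions."""

    def __init__(self):
        self.indices = {}  # org_id -> list of documents

    def add(self, org_id, document):
        self.indices.setdefault(org_id, []).append(document)

    def retrieve(self, org_id, query):
        # Only the caller's own partition is ever searched.
        docs = self.indices.get(org_id, [])
        return [d for d in docs if query.lower() in d.lower()]
```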


LONG-CONTEXT MODELS WON’T MAKE RAG OBSOLETE

With Gemini 1.5 providing context windows of up to 10M tokens in size, some have
begun to question the future of RAG.

> I tend to believe that Gemini 1.5 is significantly overhyped by Sora. A
> context window of 10M tokens effectively makes most of existing RAG frameworks
> unnecessary — you simply put whatever your data into the context and talk to
> the model like usual. Imagine how it does to all the startups / agents /
> langchain projects where most of the engineering efforts goes to RAG 😅 Or in
> one sentence: the 10m context kills RAG. Nice work Gemini — Yao Fu

While it’s true that long contexts will be a game-changer for use cases such as
analyzing multiple documents or chatting with PDFs, the rumors of RAG’s demise
are greatly exaggerated.

First, even with a context window of 10M tokens, we’d still need a way to select
information to feed into the model. Second, beyond the narrow
needle-in-a-haystack eval, we’ve yet to see convincing data that models can
effectively reason over such a large context. Thus, without good retrieval (and
ranking), we risk overwhelming the model with distractors, or may even fill the
context window with completely irrelevant information.

Finally, there’s cost. The Transformer’s inference cost scales quadratically (or
linearly in both space and time) with context length. Just because there exists
a model that could read your organization’s entire Google Drive contents before
answering each question doesn’t mean that’s a good idea. Consider an analogy to
how we use RAM: we still read and write from disk, even though there exist
compute instances with RAM running into the tens of terabytes.

So don’t throw your RAGs in the trash just yet. This pattern will remain useful
even as context windows grow in size.


TUNING AND OPTIMIZING WORKFLOWS

Prompting an LLM is just the beginning. To get the most juice out of them, we
need to think beyond a single prompt and embrace workflows. For example, how
could we split a single complex task into multiple simpler tasks? When is
finetuning or caching helpful with increasing performance and reducing
latency/cost? In this section, we share proven strategies and real-world
examples to help you optimize and build reliable LLM workflows.


STEP-BY-STEP, MULTI-TURN “FLOWS” CAN GIVE LARGE BOOSTS

We already know that by decomposing a single big prompt into multiple smaller
prompts, we can achieve better results. An example of this is AlphaCodium: By
switching from a single prompt to a multi-step workflow, they increased GPT-4
accuracy (pass@5) on CodeContests from 19% to 44%. The workflow includes:

 * Reflecting on the problem
 * Reasoning on the public tests
 * Generating possible solutions
 * Ranking possible solutions
 * Generating synthetic tests
 * Iterating on the solutions on public and synthetic tests

Small tasks with clear objectives make for the best agent or flow prompts. It’s
not required that every agent prompt requests structured output, but structured
outputs help a lot to interface with whatever system is orchestrating the
agent’s interactions with the environment.

Some things to try:

 * An explicit planning step, as tightly specified as possible. Consider having
   predefined plans to choose from.
 * Rewriting the original user prompts into agent prompts. Be careful, this
   process is lossy!
 * Agent behaviors as linear chains, DAGs, and state machines; different
   dependency and logic relationships can be more or less appropriate for
   different scales. Can you squeeze performance optimization out of different
   task architectures?
 * Planning validations; your planning can include instructions on how to
   evaluate the responses from other agents to make sure the final assembly
   works well together.
 * Prompt engineering with fixed upstream state—make sure your agent prompts
   are evaluated against a collection of variants of what may happen before.


PRIORITIZE DETERMINISTIC WORKFLOWS FOR NOW

While AI agents can dynamically react to user requests and the environment,
their non-deterministic nature makes them a challenge to deploy. Each step an
agent takes has a chance of failing, and the chances of recovering from the
error are poor. Thus, the likelihood that an agent completes a multi-step task
successfully decreases exponentially as the number of steps increases. As a
result, teams building agents find it difficult to deploy reliable agents.

A promising approach is to have agent systems that produce deterministic plans
which are then executed in a structured, reproducible way. In the first step,
given a high-level goal or prompt, the agent generates a plan. Then, the plan is
executed deterministically. This allows each step to be more predictable and
reliable. Benefits include:

 * Generated plans can serve as few-shot samples to prompt or finetune an
   agent.
 * Deterministic execution makes the system more reliable, and thus easier to
   test and debug. Furthermore, failures can be traced to the specific steps in
   the plan.
 * Generated plans can be represented as directed acyclic graphs (DAGs) which
   are easier, relative to a static prompt, to understand and adapt to new
   situations.
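The plan-then-execute split can be sketched as: the agent generates a plan (here just a list of step names), and a plain interpreter runs it deterministically, so any failure is traceable to one step. The step registry below is our own illustrative assumption; in practice each step may call a model or a tool.

```python
def execute_plan(plan, steps, state):
    """Run an agent-generated plan deterministically, one step at a time."""
    for name in plan:
        if name not in steps:
            raise ValueError(f"unknown step: {name}")
        state = steps[name](state)  # a failure here is traceable to this step
    return state


# Hypothetical step registry for illustration.
steps = {
    "extract": lambda s: {**s, "items": s["text"].split()},
    "count": lambda s: {**s, "n": len(s["items"])},
}
result = execute_plan(["extract", "count"], steps, {"text": "a b c"})
```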

The most successful agent builders may be those with strong experience managing
junior engineers because the process of generating plans is similar to how we
instruct and manage juniors. We give juniors clear goals and concrete plans,
instead of vague open-ended directions, and we should do the same for our agents
too.

In the end, the key to reliable, working agents will likely be found in adopting
more structured, deterministic approaches, as well as collecting data to refine
prompts and finetune models. Without this, we’ll build agents that may work
exceptionally well some of the time, but on average, disappoint users which
leads to poor retention.


GETTING MORE DIVERSE OUTPUTS BEYOND TEMPERATURE

Suppose your task requires diversity in an LLM’s output. Maybe you’re writing an
LLM pipeline to suggest products to buy from your catalog given a list of
products the user bought previously. When running your prompt multiple times,
you might notice that the resulting recommendations are too similar—so you might
increase the temperature parameter in your LLM requests.

Briefly, increasing the temperature parameter makes LLM responses more varied.
At sampling time, the probability distributions of the next token become
flatter, meaning that tokens which are usually less likely get chosen more
often. Still, when increasing temperature, you may notice some failure modes
related to output diversity. For example:

 * Some products from the catalog that could be a good fit may never be output
   by the LLM.
 * The same handful of products might be overrepresented in outputs, if they
   are highly likely to follow the prompt based on what the LLM has learned at
   training time.
 * If the temperature is too high, you may get outputs that reference
   nonexistent products (or gibberish!).

In other words, increasing temperature does not guarantee that the LLM will
sample outputs from the probability distribution you expect (e.g., uniform
random). Nonetheless, we have other tricks to increase output diversity. The
simplest way is to adjust elements within the prompt. For example, if the prompt
template includes a list of items, such as historical purchases, shuffling the
order of these items each time they’re inserted into the prompt can make a
significant difference.

Additionally, keeping a short list of recent outputs can help prevent
redundancy. In our recommended products example, by instructing the LLM to avoid
suggesting items from this recent list, or by rejecting and resampling outputs
that are similar to recent suggestions, we can further diversify the responses.
Another effective strategy is to vary the phrasing used in the prompts. For
instance, incorporating phrases like “pick an item that the user would love
using regularly” or “select a product that the user would likely recommend to
friends” can shift the focus and thereby influence the variety of recommended
products.
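Both tricks, shuffling prompt items and steering away from recent outputs, fit in a small prompt builder; the wording and structure here are illustrative assumptions:

```python
import random


def build_recs_prompt(purchases, recent_suggestions, rng=random):
    """Vary item order per call and ask the model to avoid recent picks."""
    items = list(purchases)
    rng.shuffle(items)  # a different ordering each time the prompt is built
    avoid = ", ".join(recent_suggestions) or "none"
    return (
        f"Previously purchased: {', '.join(items)}.\n"
        f"Do not suggest items from this recent list: {avoid}.\n"
        "Pick an item from the catalog that the user would love using regularly."
    )
```

Rejection-and-resample on near-duplicate outputs can then be layered on top of this at call time.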


CACHING IS UNDERRATED

Caching saves cost and eliminates generation latency by removing the need to
recompute responses for the same input. Furthermore, if a response has
previously been guardrailed, we can serve these vetted responses and reduce the
risk of serving harmful or inappropriate content.

One straightforward approach to caching is to use unique IDs for the items being
processed, such as if we’re summarizing new articles or product reviews. When a
request comes in, we can check to see if a summary already exists in the cache.
If so, we can return it immediately; if not, we generate, guardrail, and serve
it, and then store it in the cache for future requests.
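That check-generate-guardrail-store loop, sketched with the generation and guardrail steps stubbed out as callables:

```python
def cached_summary(item_id, text, cache, generate, guardrail):
    """Serve a vetted summary, generating and guardrailing only on a cache miss."""
    if item_id in cache:
        return cache[item_id]
    summary = generate(text)
    vetted = guardrail(summary)  # only vetted responses enter the cache
    cache[item_id] = vetted
    return vetted
```

Because only guardrailed responses are stored, every cache hit is a response that has already been vetted.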

For more open-ended queries, we can borrow techniques from the field of search,
which also leverages caching for open-ended inputs. Features like autocomplete
and spelling correction also help normalize user input and thus increase the
cache hit rate.


WHEN TO FINETUNE

We may have some tasks where even the most cleverly designed prompts fall short.
For example, even after significant prompt engineering, our system may still be
a ways from returning reliable, high-quality output. If so, then it may be
necessary to finetune a model for your specific task.

Successful examples include:

 * Honeycomb’s Natural Language Query Assistant: Initially, the “programming
   manual” was provided in the prompt together with n-shot examples for
   in-context learning. While this worked decently, fine-tuning the model led
   to better output on the syntax and rules of the domain-specific language.
 * Rechat’s Lucy: The LLM needed to generate responses in a very specific
   format that combined structured and unstructured data for the frontend to
   render correctly. Fine-tuning was essential to get it to work consistently.

Nonetheless, while fine-tuning can be effective, it comes with significant
costs. We have to annotate fine-tuning data, finetune and evaluate models, and
eventually self-host them. Thus, consider if the higher upfront cost is worth
it. If prompting gets you 90% of the way there, then fine-tuning may not be
worth the investment. However, if we do decide to finetune, to reduce the cost
of collecting human annotated data, we can generate and finetune on synthetic
data, or bootstrap on open-source data.


EVALUATION & MONITORING

Evaluating LLMs can be a minefield. The inputs and the outputs of LLMs are
arbitrary text, and the tasks we set them to are varied. Nonetheless, rigorous
and thoughtful evals are critical—it’s no coincidence that technical leaders at
OpenAI work on evaluation and give feedback on individual evals.

Evaluating LLM applications invites a diversity of definitions and reductions:
it’s simply unit testing, or it’s more like observability, or maybe it’s just
data science. We have found all of these perspectives useful. In the following
section, we provide some lessons we’ve learned about what is important in
building evals and monitoring pipelines.


CREATE A FEW ASSERTION-BASED UNIT TESTS FROM REAL INPUT/OUTPUT SAMPLES

Create unit tests (i.e., assertions) consisting of samples of inputs and outputs
from production, with expectations for outputs based on at least three criteria.
While three criteria might seem arbitrary, it’s a practical number to start
with; fewer might indicate that your task isn’t sufficiently defined or is too
open-ended, like a general-purpose chatbot. These unit tests, or assertions,
should be triggered by any changes to the pipeline, whether it’s editing a
prompt, adding new context via RAG, or other modifications. This write-up has an
example of an assertion-based test for an actual use case.

Consider beginning with assertions that specify phrases or ideas to either
include or exclude in all responses. Also consider checks to ensure that word,
item, or sentence counts lie within a range. For other kinds of generation,
assertions can look different. Execution-evaluation is a powerful method for
evaluating code-generation, wherein you run the generated code and determine
that the state of runtime is sufficient for the user-request.
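The phrase and count assertions described above might look like this for a single production sample. The product name and word-count range are hypothetical values chosen for illustration:

```python
def check_summary(output: str) -> None:
    # Criterion 1: no instruction-following boilerplate leaks into the UI.
    assert "as an ai language model" not in output.lower()
    # Criterion 2: the summary mentions the product from the input sample
    # ("Acme Blender" is a hypothetical product for illustration).
    assert "acme blender" in output.lower()
    # Criterion 3: word count fits the range our UI renders cleanly.
    n_words = len(output.split())
    assert 20 <= n_words <= 80, f"summary is {n_words} words"
```

Wiring a handful of these into your test runner means every prompt edit or RAG change gets checked against real samples automatically.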

As an example, if the user asks for a new function named foo; then after
executing the agent’s generated code, foo should be callable! One challenge in
execution-evaluation is that the agent code frequently leaves the runtime in
slightly different form than the target code. It can be effective to “relax”
assertions to the weakest assumptions that any viable answer would satisfy.
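A relaxed execution-evaluation for the `foo` example can be sketched like this. We only check the weakest property any viable answer shares (a callable `foo` exists), rather than comparing against reference code:

```python
def execution_eval(generated_code: str) -> bool:
    # Run the agent's generated code in a fresh namespace.
    ns = {}
    try:
        exec(generated_code, ns)
    except Exception:
        return False  # code that doesn't run can't be a viable answer
    # Relaxed assertion: any viable answer must define a callable `foo`.
    return callable(ns.get("foo"))
```

In production you would run untrusted generated code in a sandbox (subprocess, container), not with a bare `exec`.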

Finally, using your product as intended for customers (i.e., “dogfooding”) can
provide insight into failure modes on real-world data. This approach not only
helps identify potential weaknesses, but also provides a useful source of
production samples that can be converted into evals.


LLM-AS-JUDGE CAN WORK (SOMEWHAT), BUT IT’S NOT A SILVER BULLET

LLM-as-Judge, where we use a strong LLM to evaluate the output of other LLMs,
has been met with skepticism by some. (Some of us were initially huge skeptics.)
Nonetheless, when implemented well, LLM-as-Judge achieves decent correlation
with human judgements, and can at least help build priors about how a new prompt
or technique may perform. Specifically, when doing pairwise comparisons (e.g.,
control vs. treatment), LLM-as-Judge typically gets the direction right though
the magnitude of the win/loss may be noisy.

Here are some suggestions to get the most out of LLM-as-Judge:

 * Use pairwise comparisons: Instead of asking the LLM to score a single
   output on a Likert scale, present it with two options and ask it to select
   the better one. This tends to lead to more stable results.
 * Control for position bias: The order of options presented can bias the
   LLM’s decision. To mitigate this, do each pairwise comparison twice,
   swapping the order of the pair each time. Just be sure to attribute wins to
   the right option after swapping!
 * Allow for ties: In some cases, both options may be equally good. Thus,
   allow the LLM to declare a tie so it doesn’t have to arbitrarily pick a
   winner.
 * Use Chain-of-Thought: Asking the LLM to explain its decision before giving
   a final preference can increase eval reliability. As a bonus, this allows
   you to use a weaker but faster LLM and still achieve similar results.
   Because this part of the pipeline is frequently in batch mode, the extra
   latency from CoT isn’t a problem.
 * Control for response length: LLMs tend to bias toward longer responses. To
   mitigate this, ensure response pairs are similar in length.
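The swap-and-remap logic for position bias is easy to get subtly wrong, so here is a sketch. `judge` is a stand-in for your LLM call; it takes a prompt plus two candidates and returns "first", "second", or "tie":

```python
def pairwise_judge(prompt, a, b, judge):
    # Run the comparison twice with the order swapped to control for
    # position bias.
    v1 = judge(prompt, a, b)
    v2 = judge(prompt, b, a)
    # Re-map each verdict back to A/B before tallying (the crucial step).
    first_pass = {"first": "A", "second": "B", "tie": "tie"}[v1]
    second_pass = {"first": "B", "second": "A", "tie": "tie"}[v2]
    if first_pass == second_pass:
        return first_pass  # consistent verdict across both orders
    return "tie"           # disagreement across orders: treat as a tie
```

Note that a purely position-biased judge (one that always prefers whichever option is shown first) collapses to a tie under this scheme, which is exactly the behavior we want.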

One particularly powerful application of LLM-as-Judge is checking a new
prompting strategy against regression. If you have tracked a collection of
production results, sometimes you can rerun those production examples with a new
prompting strategy, and use LLM-as-Judge to quickly assess where the new
strategy may suffer.

Here’s an example of a simple but effective approach to iterate on LLM-as-Judge,
where we simply log the LLM response, the judge’s critique (i.e., CoT), and the
final outcome. These are then reviewed with stakeholders to identify areas for
improvement. Over three iterations, agreement between humans and the LLM
improved from 68% to 94%!



LLM-as-Judge is not a silver bullet though. There are subtle aspects of language
where even the strongest models fail to evaluate reliably. In addition, we’ve
found that conventional classifiers and reward models can achieve higher
accuracy than LLM-as-Judge, and with lower cost and latency. For code
generation, LLM-as-Judge can be weaker than more direct evaluation strategies
like execution-evaluation.


THE “INTERN TEST” FOR EVALUATING GENERATIONS

We like to use the following “intern test” when evaluating generations: If you
took the exact input to the language model, including the context, and gave it
to an average college student in the relevant major as a task, could they
succeed? How long would it take?

If the answer is no because the LLM lacks the required knowledge, consider ways
to enrich the context.

If the answer is no and we simply can’t improve the context to fix it, then we
may have hit a task that’s too hard for contemporary LLMs.

If the answer is yes, but it would take a while, we can try to reduce the
complexity of the task. Is it decomposable? Are there aspects of the task that
can be made more templatized?

If the answer is yes, they would get it quickly, then it’s time to dig into the
data. What’s the model doing wrong? Can we find a pattern of failures? Try
asking the model to explain itself before or after it responds, to help you
build a theory of mind.


OVEREMPHASIZING CERTAIN EVALS CAN HURT OVERALL PERFORMANCE

“When a measure becomes a target, it ceases to be a good measure.” — Goodhart’s
Law.

An example of this is the Needle-in-a-Haystack (NIAH) eval. The original eval
helped quantify model recall as context sizes grew, as well as how recall is
affected by needle position. However, it’s been so overemphasized that it’s
featured as Figure 1 for Gemini 1.5’s report. The eval involves inserting a
specific phrase (“The special magic {city} number is: {number}”) into a long
document which repeats the essays of Paul Graham, and then prompting the model
to recall the magic number.
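The needle-insertion step of a NIAH-style eval can be sketched as follows. The relative-depth parameterization is an assumption about how such evals are typically swept, not a detail from the original benchmark:

```python
def insert_needle(haystack: str, needle: str, depth: float) -> str:
    # Place the needle at a relative depth of the context:
    # 0.0 = start of the document, 1.0 = end.
    pos = int(len(haystack) * depth)
    return haystack[:pos] + " " + needle + " " + haystack[pos:]
```

Sweeping `depth` from 0.0 to 1.0 while growing the haystack is what produces the familiar recall-vs-position heatmaps.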

While some models achieve near-perfect recall, it’s questionable whether NIAH
truly reflects the reasoning and recall abilities needed in real-world
applications. Consider a more practical scenario: Given the transcript of an
hour-long meeting, can the LLM summarize the key decisions and next steps, as
well as correctly attribute each item to the relevant person? This task is more
realistic, going beyond rote memorization and also considering the ability to
parse complex discussions, identify relevant information, and synthesize
summaries.

Here’s an example of a practical NIAH eval. Using transcripts of doctor-patient
video calls, the LLM is queried about the patient’s medication. It also includes
a more challenging NIAH, inserting a phrase for random ingredients for pizza
toppings, such as “The secret ingredients needed to build the perfect pizza are:
Espresso-soaked dates, Lemon and Goat cheese.” Recall was around 80% on the
medication task and 30% on the pizza task.



Tangentially, an overemphasis on NIAH evals can lead to lower performance on
extraction and summarization tasks. Because these LLMs are so finetuned to
attend to every sentence, they may start to treat irrelevant details and
distractors as important, thus including them in the final output (when they
shouldn’t!).

This could also apply to other evals and use cases. Take summarization, for
example: an emphasis on factual consistency could lead to summaries that are
less specific (and thus less likely to be factually inconsistent) and possibly
less relevant. Conversely, an emphasis on writing style and eloquence could
lead to more flowery, marketing-type language that could introduce factual
inconsistencies.


SIMPLIFY ANNOTATION TO BINARY TASKS OR PAIRWISE COMPARISONS

Providing open-ended feedback or ratings for model output on a Likert scale is
cognitively demanding. As a result, the data collected is more noisy—due to
variability among human raters—and thus less useful. A more effective approach
is to simplify the task and reduce the cognitive burden on annotators. Two tasks
that work well are binary classifications and pairwise comparisons.

In binary classifications, annotators are asked to make a simple yes-or-no
judgment on the model’s output. They might be asked whether the generated
summary is factually consistent with the source document, or whether the
proposed response is relevant, or if it contains toxicity. Compared to the
Likert scale, binary decisions are more precise, have higher consistency among
raters, and lead to higher throughput. This is how Doordash set up their
labeling queues for tagging menu items, through a tree of yes-no questions.

In pairwise comparisons, the annotator is presented with a pair of model
responses and asked which is better. Because it’s easier for humans to say “A
is better than B” than to assign an individual score to A or B, this leads to
faster and more reliable annotations (over Likert scales). At a
Llama2 meetup, Thomas Scialom, an author on the Llama2 paper, confirmed that
pairwise-comparisons were faster and cheaper than collecting supervised
finetuning data such as written responses. The former’s cost is $3.5 per unit
while the latter’s cost is $25 per unit.

If you’re starting to write labeling guidelines, here are some reference
guidelines from Google and Bing Search.


(REFERENCE-FREE) EVALS AND GUARDRAILS CAN BE USED INTERCHANGEABLY

Guardrails help to catch inappropriate or harmful content while evals help to
measure the quality and accuracy of the model’s output. In the case of
reference-free evals, they may be considered two sides of the same coin.
Reference-free evals are evaluations that don’t rely on a “golden” reference,
such as a human-written answer, and can assess the quality of output based
solely on the input prompt and the model’s response.

Some examples of these are summarization evals, where we only have to consider
the input document to evaluate the summary on factual consistency and relevance.
If the summary scores poorly on these metrics, we can choose not to display it
to the user, effectively using the eval as a guardrail. Similarly,
reference-free translation evals can assess the quality of a translation without
needing a human-translated reference, again allowing us to use it as a
guardrail.
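Using a reference-free eval as a guardrail is just a thresholded gate in front of the user. A minimal sketch, where `consistency_scorer` stands in for any reference-free factual-consistency model:

```python
def serve_summary(document, summary, consistency_scorer, threshold=0.8):
    # Reference-free eval: score factual consistency against the input
    # document only; no golden reference summary is needed.
    score = consistency_scorer(document, summary)
    # Used as a guardrail: below-threshold summaries are withheld.
    return summary if score >= threshold else None
```

A `None` return would trigger a regeneration or a fallback message rather than showing a suspect summary to the user.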


LLMS WILL RETURN OUTPUT EVEN WHEN THEY SHOULDN’T

A key challenge when working with LLMs is that they’ll often generate output
even when they shouldn’t. This can lead to harmless but nonsensical responses,
or more egregious defects like toxicity or dangerous content. For example, when
asked to extract specific attributes or metadata from a document, an LLM may
confidently return values even when those values don’t actually exist.
Alternatively, the model may respond in a language other than English because we
provided non-English documents in the context.

While we can try to prompt the LLM to return a “not applicable” or “unknown”
response, it’s not foolproof. Even when the log probabilities are available,
they’re a poor indicator of output quality. While log probs indicate the
likelihood of a token appearing in the output, they don’t necessarily reflect
the correctness of the generated text. Moreover, for instruction-tuned models
that are trained to respond to queries and generate coherent responses, log
probabilities may not be well-calibrated. Thus, while a high log probability
may indicate that the output is fluent and coherent, it doesn’t mean it’s
accurate or relevant.

While careful prompt engineering can help to some extent, we should complement
it with robust guardrails that detect and filter/regenerate undesired output.
For example, OpenAI provides a content moderation API that can identify unsafe
responses such as hate speech, self-harm, or sexual output. Similarly, there are
numerous packages for detecting personally identifiable information (PII). One
benefit is that guardrails are largely agnostic of the use case and can thus be
applied broadly to all output in a given language. In addition, with precise
retrieval, our system can deterministically respond “I don’t know” if there are
no relevant documents.
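The layered defense described above (a deterministic refusal when retrieval comes up empty, plus a post-generation filter) can be sketched like this. `generate` and `is_flagged` are stand-ins for your LLM call and a moderation classifier (e.g., backed by a content moderation API):

```python
def answer(query, retrieved_docs, generate, is_flagged):
    # Deterministic refusal: with no relevant documents retrieved,
    # don't let the LLM invent an answer.
    if not retrieved_docs:
        return "I don't know."
    response = generate(query, retrieved_docs)
    # Post-generation guardrail: filter unsafe output before serving.
    if is_flagged(response):
        return "I can't help with that."
    return response
```

In practice you might regenerate instead of refusing on a flagged response, but the control flow is the same.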

A corollary here is that LLMs may fail to produce outputs when they are expected
to. This can happen for various reasons, from straightforward issues like long
tail latencies from API providers to more complex ones such as outputs being
blocked by content moderation filters. As such, it’s important to consistently
log inputs and (potentially a lack of) outputs for debugging and monitoring.


HALLUCINATIONS ARE A STUBBORN PROBLEM

Unlike content safety or PII defects which have a lot of attention and thus
seldom occur, factual inconsistencies are stubbornly persistent and more
challenging to detect. They’re more common, occurring at a baseline rate of 5%
to 10%, and from what we’ve learned from LLM providers, it can be challenging
to get it below 2%, even on simple tasks such as summarization.

To address this, we can combine prompt engineering (upstream of generation) and
factual inconsistency guardrails (downstream of generation). For prompt
engineering, techniques like CoT help reduce hallucination by getting the LLM to
explain its reasoning before finally returning the output. Then, we can apply a
factual inconsistency guardrail to assess the factuality of summaries and filter
or regenerate hallucinations. In some cases, hallucinations can be
deterministically detected. When using resources from RAG retrieval, if the
output is structured and identifies what the resources are, you should be able
to manually verify they’re sourced from the input context.
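That deterministic check is simple set containment: every source the model cites must appear among the retrieved documents. A minimal sketch, assuming the structured output exposes citation IDs:

```python
def citations_grounded(output_citations, retrieved_ids):
    # Deterministic hallucination check: a citation that wasn't in the
    # retrieved context is fabricated by construction.
    return set(output_citations) <= set(retrieved_ids)
```

Outputs that fail this check can be filtered or regenerated without ever calling a model-based factuality judge.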


OPERATIONAL: DAY-TO-DAY AND ORG CONCERNS


DATA

Just as the quality of ingredients determines the dish’s taste, the quality of
input data constrains the performance of machine learning systems. In addition,
output data is the only way to tell whether the product is working or not. All
the authors focus tightly on the data, looking at inputs and outputs for several
hours a week to better understand the data distribution: its modes, its edge
cases, and the limitations of the models processing it.


CHECK FOR DEVELOPMENT-PROD SKEW

A common source of errors in traditional machine learning pipelines is
train-serve skew. This happens when the data used in training differs from what
the model encounters in production. Although we can use LLMs without training
or fine-tuning (and hence have no training set), a similar issue arises with
development-prod data skew. Essentially, the data we test our systems on during
development should mirror what the systems will face in production. If not, we
might find our production accuracy suffering.

LLM development-prod skew can be categorized into two types: structural and
content-based. Structural skew includes issues like formatting discrepancies,
such as differences between a JSON dictionary with a list-type value and a JSON
list, inconsistent casing, and errors like typos or sentence fragments. These
errors can lead to unpredictable model performance because different LLMs are
trained on specific data formats, and prompts can be highly sensitive to minor
changes. Content-based or “semantic” skew refers to differences in the meaning
or context of the data. 

As in traditional ML, it’s useful to periodically measure skew between the LLM
input/output pairs. Simple metrics like the length of inputs and outputs or
specific formatting requirements (e.g., JSON or XML) are straightforward ways to
track changes. For more “advanced” drift detection, consider clustering
embeddings of input/output pairs to detect semantic drift, such as shifts in the
topics users are discussing, which could indicate they are exploring areas the
model hasn’t been exposed to before. 
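The simple length metric mentioned above is cheap to implement and worth having before any embedding-based drift detection. A minimal sketch, with the 25% tolerance as an arbitrary illustrative default:

```python
from statistics import mean

def length_skew_ok(dev_outputs, prod_outputs, tolerance=0.25):
    # Compare mean output length (in words) between development and
    # production; a large relative difference is a cheap skew signal
    # worth investigating, not proof of a regression.
    dev_len = mean(len(o.split()) for o in dev_outputs)
    prod_len = mean(len(o.split()) for o in prod_outputs)
    return abs(prod_len - dev_len) / dev_len <= tolerance
```

The same shape of check works for other cheap signals, such as the fraction of outputs that parse as valid JSON.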

When testing changes, such as prompt engineering, ensure that hold-out datasets
are current and reflect the most recent types of user interactions. For example,
if typos are common in production inputs, they should also be present in the
hold-out data. Beyond just numerical skew measurements, it’s beneficial to
perform qualitative assessments on outputs. Regularly reviewing your model’s
outputs—a practice colloquially known as “vibe checks”—ensures that the results
align with expectations and remain relevant to user needs. Finally,
incorporating nondeterminism into skew checks is also useful—by running the
pipeline multiple times for each input in our testing dataset and analyzing all
outputs, we increase the likelihood of catching anomalies that might occur only
occasionally.


LOOK AT SAMPLES OF LLM INPUTS AND OUTPUTS EVERY DAY

LLMs are dynamic and constantly evolving. Despite their impressive zero-shot
capabilities and often delightful outputs, their failure modes can be highly
unpredictable. For custom tasks, regularly reviewing data samples is essential
to developing an intuitive understanding of how LLMs perform.

Input-output pairs from production are the “real things, real places” (genchi
genbutsu) of LLM applications, and they cannot be substituted. Recent research
highlighted that developers’ perceptions of what constitutes “good” and “bad”
outputs shift as they interact with more data (i.e., criteria drift). While
developers can come up with some criteria upfront for evaluating LLM outputs,
these predefined criteria are often incomplete. For instance, during the course
of development, we might update the prompt to increase the probability of good
responses and decrease the probability of bad ones. This iterative process of
evaluation, reevaluation, and criteria update is necessary, as it’s difficult to
predict either LLM behavior or human preference without directly observing the
outputs.

To manage this effectively, we should log LLM inputs and outputs. By examining a
sample of these logs daily, we can quickly identify and adapt to new patterns or
failure modes. When we spot a new issue, we can immediately write an assertion
or eval around it. Similarly, any updates to failure mode definitions should be
reflected in the evaluation criteria. These “vibe checks” are signals of bad
outputs; code and assertions operationalize them. Finally, this attitude must be
socialized, for example by adding review or annotation of inputs and outputs to
your on-call rotation.


WORKING WITH MODELS

With LLM APIs, we can rely on intelligence from a handful of providers. While
this is a boon, these dependencies also involve trade-offs on performance,
latency, throughput, and cost. Also, as newer, better models drop (almost every
month in the past year), we should be prepared to update our products as we
deprecate old models and migrate to newer models. In this section, we share our
lessons from working with technologies we don’t have full control over, where
the models can’t be self-hosted and managed.


GENERATE STRUCTURED OUTPUT TO EASE DOWNSTREAM INTEGRATION

For most real-world use cases, the output of an LLM will be consumed by a
downstream application via some machine-readable format. For example, ReChat, a
real-estate CRM, required structured responses for the front end to render
widgets. Similarly, Boba, a tool for generating product strategy ideas, needed
structured output with fields for title, summary, plausibility score, and time
horizon. Finally, LinkedIn shared about constraining the LLM to generate YAML,
which is then used to decide which skill to use, as well as provide the
parameters to invoke the skill.

This application pattern is an extreme version of Postel’s Law: be liberal in
what you accept (arbitrary natural language) and conservative in what you send
(typed, machine-readable objects). As such, we expect it to be extremely
durable.

Currently, Instructor and Outlines are the de facto standards for coaxing
structured output from LLMs. If you’re using an LLM API (e.g., Anthropic,
OpenAI), use Instructor; if you’re working with a self-hosted model (e.g.,
Huggingface), use Outlines.
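Libraries like Instructor and Outlines take care of this coaxing and validation for you. As a hand-rolled sketch of the underlying "conservative in what you send" idea, using Boba-style fields (the exact field names here are illustrative):

```python
import json

# Typed schema we promise to downstream consumers.
REQUIRED = {"title": str, "summary": str, "plausibility": float, "horizon_years": int}

def parse_idea(raw: str) -> dict:
    # Reject any model response that isn't valid JSON with exactly the
    # typed fields the frontend expects.
    obj = json.loads(raw)
    for key, typ in REQUIRED.items():
        if not isinstance(obj.get(key), typ):
            raise ValueError(f"bad or missing field: {key}")
    return obj
```

A raised error here would typically trigger a retry of the generation with the validation message appended to the prompt, which is essentially what Instructor automates.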


MIGRATING PROMPTS ACROSS MODELS IS A PAIN IN THE ASS

Sometimes, our carefully crafted prompts work superbly with one model but fall
flat with another. This can happen when we’re switching between various model
providers, as well as when we upgrade across versions of the same model. 

For example, Voiceflow found that migrating from gpt-3.5-turbo-0301 to
gpt-3.5-turbo-1106 led to a 10% drop on their intent classification task.
(Thankfully, they had evals!) Similarly, GoDaddy observed a trend in the
positive direction, where upgrading to version 1106 narrowed the performance gap
between gpt-3.5-turbo and gpt-4. (Or, if you’re a glass-half-empty person, you
might be disappointed that gpt-4’s lead was reduced with the new upgrade.)

Thus, if we have to migrate prompts across models, expect it to take more time
than simply swapping the API endpoint. Don’t assume that plugging in the same
prompt will lead to similar or better results. Also, having reliable, automated
evals helps with measuring task performance before and after migration, and
reduces the effort needed for manual verification.


VERSION AND PIN YOUR MODELS

In any machine learning pipeline, “changing anything changes everything”. This
is particularly relevant as we rely on components like large language models
(LLMs) that we don’t train ourselves and that can change without our knowledge.

Fortunately, many model providers offer the option to “pin” specific model
versions (e.g., gpt-4-turbo-1106). This enables us to use a specific version of
the model weights, ensuring they remain unchanged. Pinning model versions in
production can help avoid unexpected changes in model behavior, which could lead
to customer complaints about issues that may crop up when a model is swapped,
such as overly verbose outputs or other unforeseen failure modes.

Additionally, consider maintaining a shadow pipeline that mirrors your
production setup but uses the latest model versions. This enables safe
experimentation and testing with new releases. Once you’ve validated the
stability and quality of the outputs from these newer models, you can
confidently update the model versions in your production environment.


CHOOSE THE SMALLEST MODEL THAT GETS THE JOB DONE

When working on a new application, it’s tempting to use the biggest, most
powerful model available. But once we’ve established that the task is
technically feasible, it’s worth experimenting to see whether a smaller model
can achieve comparable results.

The benefits of a smaller model are lower latency and cost. While it may be
weaker, techniques like chain-of-thought, n-shot prompts, and in-context
learning can help smaller models punch above their weight. Beyond LLM APIs,
fine-tuning on our specific tasks can also help increase performance.

Taken together, a carefully crafted workflow using a smaller model can often
match, or even surpass, the output quality of a single large model, while being
faster and cheaper. For example, this tweet shares anecdata of how Haiku +
10-shot prompt outperforms zero-shot Opus and GPT-4. In the long term, we expect
to see more examples of flow-engineering with smaller models as the optimal
balance of output quality, latency, and cost.

As another example, take the humble classification task. Lightweight models like
DistilBERT (67M parameters) are a surprisingly strong baseline. The 400M
parameter DistilBART is another great option—when finetuned on open-source data,
it could identify hallucinations with an ROC-AUC of 0.84, surpassing most LLMs
at less than 5% of the latency and cost.

The point is, don’t overlook smaller models. While it’s easy to throw a massive
model at every problem, with some creativity and experimentation, we can often
find a more efficient solution. 


PRODUCT

While new technology offers new possibilities, the principles of building great
products are timeless. Thus, even if we’re solving new problems for the first
time, we don’t have to reinvent the wheel on product design. There’s a lot to
gain from grounding our LLM application development in solid product
fundamentals, allowing us to deliver real value to the people we serve.


INVOLVE DESIGN EARLY AND OFTEN

Having a designer will push you to understand and think deeply about how your
product can be built and presented to users. We sometimes stereotype designers
as folks who take things and make them pretty. But beyond just the user
interface, they also rethink how the user experience can be improved, even if it
means breaking existing rules and paradigms.

Designers are especially gifted at reframing the user’s needs into various
forms. Some of these forms are more tractable to solve than others, and thus,
they may offer more or fewer opportunities for AI solutions. Like many other
products, building AI products should be centered around the job to be done, not
the technology that powers them.

Focus on asking yourself: “What job is the user asking this product to do for
them? Is that job something a chatbot would be good at? How about autocomplete?
Maybe something different!” Consider the existing design patterns and how they
relate to the job-to-be-done. These are the invaluable assets that designers add
to your team’s capabilities.


DESIGN YOUR UX FOR HUMAN-IN-THE-LOOP

One way to get quality annotations is to integrate Human-in-the-Loop (HITL) into
the user experience (UX). By allowing users to provide feedback and corrections
easily, we can improve the immediate output and collect valuable data to improve
our models.

Imagine an e-commerce platform where users upload and categorize their products.
There are several ways we could design the UX:

 * The user manually selects the right product category; an LLM periodically
   checks new products and corrects miscategorization on the backend.
 * The user doesn’t select any category at all; an LLM periodically categorizes
   products on the backend (with potential errors).
 * An LLM suggests a product category in real-time, which the user can validate
   and update as needed.

While all three approaches involve an LLM, they provide very different UXes. The
first approach puts the initial burden on the user and has the LLM acting as a
post-processing check. The second requires zero effort from the user but
provides no transparency or control. The third strikes the right balance. By
having the LLM suggest categories upfront, we reduce cognitive load on the user
and they don’t have to learn our taxonomy to categorize their product! At the
same time, by allowing the user to review and edit the suggestion, they have the
final say in how their product is classified, putting control firmly in their
hands. As a bonus, the third approach creates a natural feedback loop for model
improvement. Suggestions that are good are accepted (positive labels) and those
that are bad are updated (negative followed by positive labels).
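Turning that feedback loop into training labels is mechanical. A minimal sketch of deriving implicit labels from the suggest-then-validate flow (the tuple format is an illustrative choice, not a standard):

```python
def feedback_labels(suggested: str, final: str):
    # Implicit HITL labels: an accepted suggestion is a positive example;
    # an edited one yields a negative (the suggestion) and a positive
    # (the user's correction).
    if suggested == final:
        return [(suggested, 1)]
    return [(suggested, 0), (final, 1)]
```

Accumulating these pairs over time gives you labeled data for fine-tuning or for training a lightweight reranker, with no separate annotation effort.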

This pattern of suggestion, user validation, and data collection is commonly
seen in several applications:

 * Coding assistants: Where users can accept a suggestion (strong positive),
   accept and tweak a suggestion (positive), or ignore a suggestion (negative)
 * Midjourney: Where users can choose to upscale and download the image (strong
   positive), vary an image (positive), or generate a new set of images
   (negative)
 * Chatbots: Where users can provide thumbs up (positive) or thumbs down
   (negative) on responses, or choose to regenerate a response if it was really
   bad (strong negative).

Feedback can be explicit or implicit. Explicit feedback is information users
provide in response to a request by our product; implicit feedback is
information we learn from user interactions without needing users to
deliberately provide feedback. Coding assistants and Midjourney are examples of
implicit feedback, while thumbs up and thumbs down are explicit feedback. If we
design our UX well, like coding assistants and Midjourney, we can collect plenty
of implicit feedback to improve our product and models.


PRIORITIZE YOUR HIERARCHY OF NEEDS RUTHLESSLY

As we think about putting our demo into production, we’ll have to think about
the requirements for:

 * Reliability: 99.9% uptime, adherence to structured output
 * Harmlessness: Not generate offensive, NSFW, or otherwise harmful content
 * Factual consistency: Being faithful to the context provided, not making
   things up
 * Usefulness: Relevant to the users’ needs and request
 * Scalability: Latency SLAs, supported throughput
 * Cost: Because we don’t have unlimited budget
 * And more: Security, privacy, fairness, GDPR, DMA, etc, etc.

If we try to tackle all these requirements at once, we’re never going to ship
anything. Thus, we need to prioritize. Ruthlessly. This means being clear what
is non-negotiable (e.g., reliability, harmlessness) without which our product
can’t function or won’t be viable. It’s all about identifying the minimum
lovable product. We have to accept that the first version won’t be perfect, and
just launch and iterate.


CALIBRATE YOUR RISK TOLERANCE BASED ON THE USE CASE

When deciding on the language model and level of scrutiny of an application,
consider the use case and audience. For a customer-facing chatbot offering
medical or financial advice, we’ll need a very high bar for safety and accuracy.
Mistakes or bad output could cause real harm and erode trust. But for less
critical applications, such as a recommender system, or internal-facing
applications like content classification or summarization, excessively strict
requirements only slow progress without adding much value.

This aligns with a recent a16z report showing that many companies are moving
faster with internal LLM applications compared to external ones. By
experimenting with AI for internal productivity, organizations can start
capturing value while learning how to manage risk in a more controlled
environment. Then, as they gain confidence, they can expand to customer-facing
use cases.


TEAM & ROLES

No job function is easy to define, but writing a job description for the work in
this new space is more challenging than others. We’ll forgo Venn diagrams of
intersecting job titles or suggestions for job descriptions. We will, however,
submit to the existence of a new role—the AI engineer—and discuss its place.
Importantly, we’ll discuss the rest of the team and how responsibilities should
be assigned.


FOCUS ON PROCESS, NOT TOOLS

When faced with new paradigms, such as LLMs, software engineers tend to favor
tools. As a result, we overlook the problem and process the tool was supposed to
solve. In doing so, many engineers take on accidental complexity, which has
negative consequences for the team’s long-term productivity.

For example, this write-up discusses how certain tools can automatically create
prompts for large language models. It argues (rightfully IMHO) that engineers
who use these tools without first understanding the problem-solving methodology
or process end up taking on unnecessary technical debt.

In addition to accidental complexity, tools are often underspecified. For
example, there is a growing industry of LLM evaluation tools that offer “LLM
Evaluation In A Box” with generic evaluators for toxicity, conciseness, tone,
etc. We have seen many teams adopt these tools without thinking critically about
the specific failure modes of their domains. Contrast this to EvalGen. It
focuses on teaching users the process of creating domain-specific evals by
deeply involving the user each step of the way, from specifying criteria, to
labeling data, to checking evals. The software leads the user through a workflow
that looks like this:



Shankar, S., et al. (2024). Who Validates the Validators? Aligning LLM-Assisted
Evaluation of LLM Outputs with Human Preferences. Retrieved from
https://arxiv.org/abs/2404.12272

EvalGen guides the user through a best practice of crafting LLM evaluations,
namely:

 1. Defining domain-specific tests (bootstrapped automatically from the prompt).
    These are defined either as code-based assertions or with LLM-as-a-Judge.
 2. Aligning the tests with human judgment, so that the user can check that the
    tests capture the specified criteria.
 3. Iterating on your tests as the system (prompts, etc.) changes.
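
To illustrate the first step, here is a sketch of what domain-specific, code-based assertions can look like. The domain (a support-ticket summarizer) and every criterion below are hypothetical examples, not EvalGen’s own checks:

```python
def eval_mentions_ticket_id(summary: str, ticket_id: str) -> bool:
    # Criterion: the summary must reference the original ticket ID.
    return ticket_id in summary

def eval_under_word_limit(summary: str, limit: int = 50) -> bool:
    # Criterion: summaries must stay concise.
    return len(summary.split()) <= limit

def eval_no_hedging(summary: str) -> bool:
    # Criterion: crude check for hedging language that often signals fabrication.
    banned = ("probably", "might be", "i think")
    return not any(phrase in summary.lower() for phrase in banned)

def run_evals(summary: str, ticket_id: str) -> dict:
    # Run every assertion and report pass/fail per criterion.
    return {
        "mentions_ticket_id": eval_mentions_ticket_id(summary, ticket_id),
        "under_word_limit": eval_under_word_limit(summary),
        "no_hedging": eval_no_hedging(summary),
    }

summary = "TICKET-42: customer reports login failures after the 2.3 update; fix shipped."
results = run_evals(summary, "TICKET-42")
```

Notice that none of these would come out of a generic “toxicity/conciseness/tone” box; they encode what failure means for this product, which is exactly the process-first mindset the tools can’t supply for you.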

EvalGen provides developers with a mental model of the evaluation building
process without anchoring them to a specific tool. We have found that after
providing AI Engineers with this context, they often decide to select leaner
tools or build their own.  

There are too many components of LLMs beyond prompt writing and evaluations to
list exhaustively here.  However, it is important that AI Engineers seek to
understand the processes before adopting tools.


ALWAYS BE EXPERIMENTING

ML products are deeply intertwined with experimentation. Not only the A/B,
randomized controlled trial kind, but also frequent attempts at modifying the
smallest possible components of your system, and doing offline evaluation. The
reason why everyone is so hot for evals is not actually about trustworthiness
and confidence—it’s about enabling experiments! The better your evals, the
faster you can iterate on experiments, and thus the faster you can converge on
the best version of your system. 

It’s common to try different approaches to solving the same problem because
experimentation is so cheap now. The high cost of collecting data and training a
model is minimized—prompt engineering costs little more than human time.
Position your team so that everyone is taught the basics of prompt engineering.
This encourages everyone to experiment and leads to diverse ideas from across
the organization.

Additionally, don’t only experiment to explore; also use experiments to exploit! Have a
working version of a new task? Consider having someone else on the team approach
it differently. Try doing it another way that’ll be faster. Investigate prompt
techniques like Chain-of-Thought or Few-Shot to make it higher quality. Don’t
let your tooling hold you back on experimentation; if it is, rebuild it, or buy
something to make it better. 
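
One lightweight way to run such experiments is to treat prompt variants as arms of the same task and score them against a shared eval set. The templates below are hypothetical illustrations of a baseline, few-shot, and Chain-of-Thought arm:

```python
# Three hypothetical experiment arms for the same sentiment-classification task.
BASELINE = (
    "Classify the sentiment of this review as positive or negative.\n\n"
    "Review: {review}\nSentiment:"
)

FEW_SHOT = (
    "Classify the sentiment of each review as positive or negative.\n\n"
    "Review: The battery died within a week.\nSentiment: negative\n\n"
    "Review: Setup took two minutes, love it.\nSentiment: positive\n\n"
    "Review: {review}\nSentiment:"
)

CHAIN_OF_THOUGHT = (
    "Classify the sentiment of this review as positive or negative. "
    "First list the positive and negative phrases, then give a final label.\n\n"
    "Review: {review}"
)

arms = {"baseline": BASELINE, "few_shot": FEW_SHOT, "cot": CHAIN_OF_THOUGHT}
prompts = {name: template.format(review="Fast shipping, but flimsy build quality.")
           for name, template in arms.items()}
# Each arm would then be run through the model and scored on the same eval set,
# so the winner is decided by evals rather than vibes.
```

Because the arms share one eval set, the comparison stays apples-to-apples; this is the sense in which better evals directly buy you faster experimentation.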

Finally, during product/project planning, set aside time for building evals and
running multiple experiments. Think of the product spec for engineering
products, but add to it clear criteria for evals. And during roadmapping, don’t
underestimate the time required for experimentation—expect to do multiple
iterations of development and evals before getting the green light for
production.


EMPOWER EVERYONE TO USE NEW AI TECHNOLOGY

As generative AI increases in adoption, we want the entire team—not just the
experts—to understand and feel empowered to use this new technology. There’s no
better way to develop intuition for how LLMs work (e.g., latencies, failure
modes, UX) than to, well, use them. LLMs are relatively accessible: You don’t
need to know how to code to improve a pipeline’s performance, and everyone can
start contributing via prompt engineering and evals.

A big part of this is education. It can start as simple as the basics of prompt
engineering, where techniques like n-shot prompting and CoT help condition the
model towards the desired output. Folks who have the knowledge can also educate
about the more technical aspects, such as how LLMs are autoregressive in nature.
In other words, while input tokens are processed in parallel, output tokens are
generated sequentially. As a result, latency is more a function of output length
than input length—this is a key consideration when designing UXes and setting
performance expectations.
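
The latency intuition above can be captured in a back-of-the-envelope model. The throughput numbers here are illustrative assumptions, not benchmarks of any particular model or provider:

```python
def estimate_latency_s(input_tokens: int, output_tokens: int,
                       prefill_tps: float = 5000.0, decode_tps: float = 50.0) -> float:
    # Prompt tokens are processed in parallel (fast prefill), while output
    # tokens are generated one at a time (slow sequential decode).
    return input_tokens / prefill_tps + output_tokens / decode_tps

short_reply = estimate_latency_s(input_tokens=2000, output_tokens=100)    # 0.4 + 2.0 = 2.4s
long_reply = estimate_latency_s(input_tokens=2000, output_tokens=800)     # 0.4 + 16.0 = 16.4s
bigger_prompt = estimate_latency_s(input_tokens=4000, output_tokens=100)  # 0.8 + 2.0 = 2.8s
```

Under these assumed rates, doubling the prompt adds a fraction of a second while an 8x longer reply multiplies latency several-fold, which is why constraining output length (or streaming tokens as they arrive) matters so much for perceived responsiveness.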

We can also go further and provide opportunities for hands-on experimentation
and exploration. A hackathon perhaps? While it may seem expensive to have an
entire team spend a few days hacking on speculative projects, the outcomes may
surprise you. We know of a team that, through a hackathon, accelerated and
almost completed their three-year roadmap within a year. Another team’s
hackathon led to paradigm-shifting UXes made possible by LLMs, which are now
prioritized for the year and beyond.


DON’T FALL INTO THE TRAP OF “AI ENGINEERING IS ALL I NEED”

As new job titles are coined, there is an initial tendency to overstate the
capabilities associated with these roles. This often results in a painful
correction as the actual scope of these jobs becomes clear. Newcomers to the
field, as well as hiring managers, might make exaggerated claims or have
inflated expectations. Notable examples over the last decade include:

 * Data Scientist: “someone who is better at statistics than any software
   engineer and better at software engineering than any statistician.”  
 * Machine Learning Engineer (MLE): a software engineering-centric view of
   machine learning 

Initially, many assumed that data scientists alone were sufficient for
data-driven projects. However, it became apparent that data scientists must
collaborate with software and data engineers to develop and deploy data products
effectively. 

This misunderstanding has shown up again with the new role of AI Engineer, with
some teams believing that AI Engineers are all you need. In reality, building
machine learning or AI products requires a broad array of specialized roles.
We’ve consulted with more than a dozen companies on AI products and have
consistently observed that they fall into the trap of believing that “AI
Engineering is all you need.” As a result, products often struggle to scale
beyond a demo as companies overlook crucial aspects involved in building a
product.

For example, evaluation and measurement are crucial for scaling a product beyond
vibe checks. The skills for effective evaluation align with some of the
strengths traditionally seen in machine learning engineers—a team composed
solely of AI Engineers will likely lack these skills. Co-author Hamel Husain
illustrates the importance of these skills in his recent work around detecting
data drift and designing domain-specific evals.

Here is a rough progression of the types of roles you need, and when you’ll need
them, throughout the journey of building an AI product:

 1. First, focus on building a product. This might include an AI engineer, but
    it doesn’t have to. AI Engineers are valuable for prototyping and iterating
    quickly on the product (UX, plumbing, etc). 
 2. Next, create the right foundations by instrumenting your system and
    collecting data. Depending on the type and scale of data, you might need
    platform and/or data engineers. You must also have systems for querying and
    analyzing this data to debug issues.
 3. Next, you will eventually want to optimize your AI system. This doesn’t
    necessarily involve training models. The basics include steps like designing
    metrics, building evaluation systems, running experiments, optimizing RAG
retrieval, debugging stochastic systems, and more. MLEs are really good at
this (though AI engineers can pick up these skills too). It usually doesn’t make
    sense to hire an MLE unless you have completed the prerequisite steps.

Aside from this, you need a domain expert at all times. At small companies, this
would ideally be the founding team—and at bigger companies, product managers can
play this role. Being aware of the progression and timing of roles is critical.
Hiring folks at the wrong time (e.g., hiring an MLE too early) or building in
the wrong order is a waste of time and money, and causes churn.  Furthermore,
regularly checking in with an MLE (but not hiring them full-time) during phases
1-2 will help the company build the right foundations. 


STRATEGIC: LONG-TERM BUSINESS STRATEGY (PENDING)

PENDING RELEASE (tentatively 6th June)

--------------------------------------------------------------------------------


STAY IN TOUCH

If you found this useful and want updates on write-ups, courses, and activities,
subscribe below.



You can also find our individual contact information on our about page.


ACKNOWLEDGEMENTS

This series started as a conversation in a group chat, where Bryan quipped that
he was inspired to write “A Year of AI Engineering”. Then, ✨magic✨ happened, and
we were all inspired to chip in and share what we’ve learned so far.

The authors would like to thank Eugene for leading the bulk of the document
integration and overall structure, in addition to a large proportion of the
lessons, as well as for primary editing responsibilities and document direction.
The authors would like to thank Bryan for the spark that led to this
writeup, restructuring the write-up into tactical, operational, and strategic
sections and their intros, and for pushing us to think bigger on how we could
reach and help the community. The authors would like to thank Charles for his
deep dives on cost and LLMOps, as well as weaving the lessons to make them more
coherent and tighter—you have him to thank for this being 30 instead of 40
pages! The authors thank Hamel and Jason for their insights from advising
clients and being on the front lines, for their broad generalizable learnings
from clients, and for deep knowledge of tools. And finally, thank you Shreya for
reminding us of the importance of evals and rigorous production practices and
for bringing her research and original results.

Finally, we would like to thank all the teams who so generously shared your
challenges and lessons in your own write-ups which we’ve referenced throughout
this series, along with the AI communities for your vibrant participation and
engagement with this group.


ABOUT THE AUTHORS

See the about page for more information on the authors.

If you found this useful, please cite this write-up as:

> Yan, Eugene, Bryan Bischof, Charles Frye, Hamel Husain, Jason Liu, and Shreya
> Shankar. 2024. ‘Applied LLMs - What We’ve Learned From A Year of Building with
> LLMs’. Applied LLMs. 8 June 2024. https://applied-llms.org/.

or

@article{AppliedLLMs2024,
  title = {What We've Learned From A Year of Building with LLMs},
  author = {Yan, Eugene and Bischof, Bryan and Frye, Charles and Husain, Hamel and Liu, Jason and Shankar, Shreya},
  journal = {Applied LLMs},
  year = {2024},
  month = {Jun},
  url = {https://applied-llms.org/}
}