batchnormal.mlperf.pw
23.27.124.238  Public Scan

URL: https://batchnormal.mlperf.pw/
Submission: On May 27 via api from US — Scanned from DE

Form analysis 2 forms found in the DOM

POST https://mlcommons.org/#gf_1

<form method="post" enctype="multipart/form-data" target="gform_ajax_frame_1" id="gform_1" action="https://mlcommons.org/#gf_1" data-formid="1" novalidate="">
  <div class="gform-body gform_body">
    <div id="gform_fields_1" class="gform_fields top_label form_sublabel_below description_below">
      <div id="field_1_1" class="gfield gfield--type-email gfield--width-full gfield_contains_required field_sublabel_below gfield--no-description field_description_below gfield_visibility_visible" data-js-reload="field_1_1">
        <label class="gfield_label gform-field-label" for="input_1_1">Email<span class="gfield_required"><span class="gfield_required gfield_required_text">(Required)</span></span></label>
        <div class="ginput_container ginput_container_email">
          <input name="input_1" id="input_1_1" type="email" value="" class="large" placeholder="Email" aria-required="true" aria-invalid="false">
        </div>
      </div>
      <fieldset id="field_1_3"
        class="gfield gfield--type-consent gfield--type-choice gfield--input-type-consent gfield--width-full gfield_contains_required field_sublabel_below gfield--has-description field_description_below gfield_visibility_visible"
        data-js-reload="field_1_3">
        <legend class="gfield_label gform-field-label gfield_label_before_complex">Consent<span class="gfield_required"><span class="gfield_required gfield_required_text">(Required)</span></span>
        </legend>
        <div class="ginput_container ginput_container_consent">
          <input name="input_3.1" id="input_1_3_1" type="checkbox" value="1" aria-describedby="gfield_consent_description_1_3" aria-required="true" aria-invalid="false"> <label
            class="gform-field-label gform-field-label--type-inline gfield_consent_label" for="input_1_3_1">I agree to the privacy policy.</label><input type="hidden" name="input_3.2" value="I agree to the privacy policy." class="gform_hidden"><input
            type="hidden" name="input_3.3" value="3" class="gform_hidden">
        </div>
        <div class="gfield_description gfield_consent_description" id="gfield_consent_description_1_3">By submiting this form I agree with the
          <a href="privacy-policy/index.html" data-cmp-ab="2" target="_blank" rel="noreferrer noopener">Privacy Policy</a>
        </div>
      </fieldset>
      <div id="field_1_7" class="gfield gfield--type-hidden gfield--width-full gform_hidden field_sublabel_below gfield--no-description field_description_below gfield_visibility_visible" data-js-reload="field_1_7">
        <div class="ginput_container ginput_container_text"><input name="input_7" id="input_1_7" type="hidden" class="gform_hidden" aria-invalid="false" value="Subscriber"></div>
      </div>
      <div id="field_1_5" class="gfield gfield--type-captcha gfield--width-full field_sublabel_below gfield--no-description field_description_below hidden_label gfield_visibility_visible" data-js-reload="field_1_5">
        <label class="gfield_label gform-field-label" for="input_1_5">CAPTCHA</label>
        <div id="input_1_5" class="ginput_container ginput_recaptcha gform-initialized" data-sitekey="6LeD_q0oAAAAAK7n9xn-WHUlroKVJc4TDnuvOCN9" data-theme="light" data-tabindex="-1" data-size="invisible" data-badge="bottomright">
          <div class="grecaptcha-badge" data-style="bottomright"
            style="width: 256px; height: 60px; position: fixed; visibility: hidden; display: block; transition: right 0.3s ease 0s; bottom: 14px; right: -186px; box-shadow: gray 0px 0px 5px; border-radius: 2px; overflow: hidden;">
            <div class="grecaptcha-logo"><iframe title="reCAPTCHA" width="256" height="60" role="presentation" name="a-htrztz84nnwc" frameborder="0" scrolling="no"
                sandbox="allow-forms allow-popups allow-same-origin allow-scripts allow-top-navigation allow-modals allow-popups-to-escape-sandbox allow-storage-access-by-user-activation"
                src="https://www.google.com/recaptcha/api2/anchor?ar=1&amp;k=6LeD_q0oAAAAAK7n9xn-WHUlroKVJc4TDnuvOCN9&amp;co=aHR0cHM6Ly9iYXRjaG5vcm1hbC5tbHBlcmYucHc6NDQz&amp;hl=en&amp;v=joHA60MeME-PNviL59xVH9zs&amp;theme=light&amp;size=invisible&amp;badge=bottomright&amp;cb=y9rjk0hcro28"
                tabindex="-1" data-cmp-info="8"></iframe></div>
            <div class="grecaptcha-error"></div><textarea id="g-recaptcha-response" name="g-recaptcha-response" class="g-recaptcha-response"
              style="width: 250px; height: 40px; border: 1px solid rgb(193, 193, 193); margin: 10px 25px; padding: 0px; resize: none; display: none;"></textarea>
          </div><iframe data-cmp-info="7" style="display: none;"></iframe>
        </div>
      </div>
    </div>
  </div>
  <div class="gform_footer before"> <input type="submit" id="gform_submit_button_1" class="gform_button button" value="Submit"
      onclick="if(window[&quot;gf_submitting_1&quot;]){return false;}  if( !jQuery(&quot;#gform_1&quot;)[0].checkValidity || jQuery(&quot;#gform_1&quot;)[0].checkValidity()){window[&quot;gf_submitting_1&quot;]=true;}  "
      onkeypress="if( event.keyCode == 13 ){ if(window[&quot;gf_submitting_1&quot;]){return false;} if( !jQuery(&quot;#gform_1&quot;)[0].checkValidity || jQuery(&quot;#gform_1&quot;)[0].checkValidity()){window[&quot;gf_submitting_1&quot;]=true;}  jQuery(&quot;#gform_1&quot;).trigger(&quot;submit&quot;,[true]); }">
    <input type="hidden" name="gform_ajax" value="form_id=1&amp;title=&amp;description=&amp;tabindex=0&amp;theme=orbital">
    <input type="hidden" class="gform_hidden" name="is_submit_1" value="1">
    <input type="hidden" class="gform_hidden" name="gform_submit" value="1">
    <input type="hidden" class="gform_hidden" name="gform_unique_id" value="">
    <input type="hidden" class="gform_hidden" name="state_1"
      value="WyJ7XCIzLjFcIjpcIjkwMTUwOGE3ZWM2MGJlOWUzOWY1ZjEzNjkyYzZiZWFmXCIsXCIzLjJcIjpcIjIyNDk4ZTkxODUyMzA5YjY1ZTZiNjZjZTc1YmVhNzQyXCIsXCIzLjNcIjpcIjFkZWIzOTJiM2EzNTJmNmM0ZjZhMjQ1OGQxZTQ4MGE5XCJ9IiwiNDE5ZDk2YzQ3YTc4NjhjZTFjY2RlNzhlNzIwM2MyYzkiXQ==">
    <input type="hidden" class="gform_hidden" name="gform_target_page_number_1" id="gform_target_page_number_1" value="0">
    <input type="hidden" class="gform_hidden" name="gform_source_page_number_1" id="gform_source_page_number_1" value="1">
    <input type="hidden" name="gform_field_values" value="">
  </div>
</form>
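
A note on the encoded values captured in this form: the reCAPTCHA anchor iframe carries the embedding origin in its base64url-encoded "co" query parameter, and the Gravity Forms "state_1" hidden input is base64-wrapped JSON. The sketch below (Python 3 standard library only; the variable names are this report's own and not part of the scanned page) shows how the origin can be recovered from the anchor URL for triage:

import base64
from urllib.parse import parse_qs, urlsplit

# 'co' parameter copied from the reCAPTCHA anchor iframe src above; it is
# base64url-encoded and names the origin the widget was embedded on.
anchor_src = ("https://www.google.com/recaptcha/api2/anchor?ar=1"
              "&k=6LeD_q0oAAAAAK7n9xn-WHUlroKVJc4TDnuvOCN9"
              "&co=aHR0cHM6Ly9iYXRjaG5vcm1hbC5tbHBlcmYucHc6NDQz&hl=en")
co = parse_qs(urlsplit(anchor_src).query)["co"][0]
padded = co + "=" * (-len(co) % 4)  # restore any stripped '=' padding
print(base64.urlsafe_b64decode(padded).decode())
# -> https://batchnormal.mlperf.pw:443

The "state_1" value appears to decode the same way: base64 decoding followed by JSON parsing yields a two-element array whose first entry is a JSON-escaped map of per-field hashes ("3.1", "3.2", "3.3"). That this is the Gravity Forms server-side field-state check is inferred from the field name and format, not stated by the scan itself.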

GET https://mlcommons.org/

<form class="search-form" method="get" action="https://mlcommons.org/" role="search">
  <label class="search-form-label screen-reader-text" for="searchform-1">Search this website</label><input class="search-form-input" type="search" name="s" id="searchform-1" placeholder="Search this website"><input class="search-form-submit"
    type="submit" value="Search">
  <meta content="https://mlcommons.org/?s={s}">
</form>
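
The second form is a plain site search: a GET to https://mlcommons.org/ with the search term in the "s" query parameter, matching the "{s}" URL template in the trailing meta tag. A minimal illustration of the equivalent request URL (Python standard library; the example query string is an assumption of this report, not scan data):

from urllib.parse import urlencode

# The search form submits GET https://mlcommons.org/?s=<query>, per the {s}
# template in the meta tag above.
query = "mlperf inference"  # illustrative search term, not from the scan
print("https://mlcommons.org/?" + urlencode({"s": query}))
# -> https://mlcommons.org/?s=mlperf+inference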

Text Content

 * Skip to primary navigation
 * Skip to main content

MLCommons

Better Machine Learning for Everyone

Menu
 * Benchmarks Submenu
   * Benchmarks
   * AI Safety Benchmarks
   * MLPerf Training
   * MLPerf Training: HPC
   * MLPerf Inference: Datacenter
   * MLPerf Inference: Edge
   * MLPerf Inference: Mobile
   * MLPerf Inference: Tiny
   * MLPerf Storage
 * Datasets Submenu
   * Datasets
   * Cognata Dataset
   * Dollar Street
   * Multilingual Spoken Words
   * People’s Speech
 * Working Groups Submenu
   * Working Groups
   * Benchmarks
   * AI Safety
   * Data
   * Research
 * AI Safety
 * Research
 * About Us Submenu
   * About Us
   * Leadership
   * Programs
   * Policies
 * Blog
 * Join Us
 * Search


BETTER AI FOR EVERYONE

Building trusted, safe, and efficient AI requires better systems for measurement
and accountability. MLCommons’ collective engineering with industry and academia
continually measures and improves the accuracy, safety, speed, and efficiency of
AI technologies.

Get involved


125+

MLCommons Members and Affiliates

6

Benchmark Suites

55,000+

MLPerf Performance Results to date


ACCELERATING ARTIFICIAL INTELLIGENCE INNOVATION

In collaboration with our 125+ founding members and affiliates, including
startups, leading companies, academics, and non-profits from around the globe,
we democratize AI through open, industry-standard benchmarks that measure
quality and performance, and by building open, large-scale, and diverse
datasets to improve AI models.

About us


FOCUS AREAS

We help advance new technologies by democratizing AI adoption through the
creation and management of open, useful measures of quality and performance,
large-scale open datasets, and ongoing research efforts.


BENCHMARKING 

Benchmarks help balance the benefits and risks of AI through quantitative tools
that guide effective and responsible AI development. They provide consistent
measurements of accuracy, safety, speed, and efficiency which enable engineers
to design reliable products and services and help researchers gain new insights
to drive the solutions of tomorrow. 

Learn more


DATASETS 

Evaluating AI systems depends on rigorous, standardized test datasets. MLCommons
builds open, large-scale, and diverse datasets and a rich ecosystem of
techniques and tools for AI data, helping the broader community deliver more
accurate and safer AI systems. 

Learn more


RESEARCH

Open collaboration with and support for the research community help accelerate
and democratize scientific discovery. MLCommons' shared research infrastructure
for benchmarking, rich datasets, and diverse community help enable the
scientific research community to derive new insights and breakthroughs in AI
for the betterment of society.

Learn more


MEMBERS

MLCommons is supported by our 125+ founding members and affiliates, including
startups, leading companies, academics, and non-profits from around the globe.


(Member and founding member logo carousel)

JOIN OUR COMMUNITY

MLCommons is a community-driven and community-funded effort. We welcome all
corporations, academic researchers, nonprofits, government organizations, and
individuals on a non-discriminatory basis. Join us!

Get involved



FEATURED ARTICLES

April 17, 2024
News


MLPERF TINY V1.2 RESULTS

MLPerf Tiny results demonstrate increased industry adoption of AI through
software support


April 16, 2024
News


ANNOUNCING MLCOMMONS AI SAFETY V0.5 PROOF OF CONCEPT

Achieving a major milestone towards standard benchmarks for evaluating AI Safety


April 16, 2024
Blog


THE AI SAFETY ECOSYSTEM NEEDS STANDARD BENCHMARKS 

IEEE Spectrum contributed blog excerpt, authored by the MLCommons AI Safety
working group


March 27, 2024
News


NEW MLPERF INFERENCE BENCHMARK RESULTS HIGHLIGHT THE RAPID GROWTH OF GENERATIVE
AI MODELS

With 70 billion parameters, Llama 2 70B is the largest model added to the MLPerf
Inference benchmark suite.


March 27, 2024
Blog


LLAMA 2 70B: AN MLPERF INFERENCE BENCHMARK FOR LARGE LANGUAGE MODELS

MLPerf Inference task force shares insights on the selection of Llama 2 for the
latest MLPerf Inference benchmark round.


March 10, 2023
Blog


PERSPECTIVE: UNLOCKING ML REQUIRES AN ECOSYSTEM APPROACH

Factories need good roads to deliver value


See all blogs and news


SUBSCRIBE FOR THE LATEST NEWS

Email(Required)

Consent(Required)
I agree to the privacy policy.
By submitting this form I agree with the Privacy Policy

CAPTCHA


 * Legal
 * Policies
 * Privacy Policy

 * Quick Links
 * Calendar
 * Discord
 * Github

 * Contact
 * support@mlcommons.org

 * Follow Us



© 2020–2024 MLCommons
