CREATING BATCH ENDPOINTS IN AZURE ML


By Beatriz Stollnitz, Microsoft
Published Dec 16 2021 11:13 AM

INTRODUCTION

Suppose you’ve trained a machine learning model to accomplish some task, and
you’d now like to provide that model’s inference capabilities as a service.
Maybe you’re writing an application of your own that will rely on this service,
or perhaps you want to make the service available to others. This is the purpose
of endpoints — they provide a simple web-based API for feeding data to your
model and getting back inference results.

Azure ML currently supports three types of endpoints: batch endpoints,
Kubernetes online endpoints, and managed online endpoints. I’m going to focus on
batch endpoints in this post, but let me start by explaining how the three types
differ.



Batch endpoints are designed to handle large requests, working asynchronously
and generating results that are held in blob storage. Because compute resources
are only provisioned when the job starts, the latency of the response is higher
than with online endpoints. However, that can result in substantially lower
costs. Online endpoints, on the other hand, are designed to quickly process
smaller requests and provide near-immediate responses. Compute resources are
provisioned at the time of deployment, and are always up and running, which
depending on your scenario may mean higher costs than batch endpoints. However,
you get real-time responses, which is critical to many scenarios. If you want
to deploy an online endpoint, you have two options: Kubernetes online endpoints
allow you to manage your own compute resources using Kubernetes, while managed
online endpoints rely on Azure to manage compute resources, OS updates, scaling,
and security. For more information about the different endpoint types and which
one is right for you, check out the documentation.

If you’re interested in managed online endpoints, check out my previous post. In
this post, I’ll show you how to work with batch endpoints. We’ll start by
training and saving two machine learning models, one using PyTorch and another
using TensorFlow. We’ll then write scoring functions that load the models and
perform predictions based on user input. After that, we’ll explore how we can
create the batch endpoints on Azure, which will require the creation of several
resources in the cloud. And finally, we’ll see how we can invoke the endpoints.
The code for this project can be found on GitHub.

Throughout this post, I’ll assume you’re familiar with machine learning concepts
like training and prediction, but I won’t assume familiarity with Azure.


AZURE ML SETUP

Here’s how you can set up Azure ML to follow the steps in this post.

 * You need to have an Azure subscription. You can get a free subscription to
   try it out.
 * Create a resource group.
 * Create a new machine learning workspace by following the “Create the
   workspace” section of the documentation. Keep in mind that you’ll be creating
   a “machine learning workspace” Azure resource, not a “workspace” Azure
   resource, which is entirely different!
 * If you have access to GitHub Codespaces, click on the “Code” button in this
   GitHub repo, select the “Codespaces” tab, and then click on “New codespace”.
 * Alternatively, if you plan to use your local machine:
   * Install the Azure CLI by following the instructions in the documentation.
   * Install the ML extension to the Azure CLI by following the “Installation”
     section of the documentation.
 * On a terminal window, login to Azure by executing az login --use-device-code.
 * Set your default subscription by executing az account set -s
   "<YOUR_SUBSCRIPTION_NAME_OR_ID>". You can verify your default subscription by
   executing az account show, or by looking at ~/.azure/azureProfile.json.
 * Set your default resource group and workspace by executing az configure
   --defaults group="<YOUR_RESOURCE_GROUP>" workspace="<YOUR_WORKSPACE>". You
   can verify your defaults by executing az configure --list-defaults or by
   looking at ~/.azure/config.
 * You can now open the Azure Machine Learning studio, where you’ll be able to
   see and manage all the machine learning resources we’ll be creating.
 * Although not essential to run the code in this post, I highly recommend
   installing the Azure Machine Learning extension for VS Code.

You’re now ready to start working with Azure ML!


TRAINING AND SAVING THE MODELS, AND CREATING THEM ON AZURE

We’ll start by training two machine learning models to classify Fashion MNIST
images — one using PyTorch and another using Keras/TensorFlow. If you’d like to
explore the training code in detail, check out my previous posts on PyTorch,
Keras and TensorFlow. The code associated with this post already includes
pre-trained models, so you can just use them as-is. But if you’d like to
recreate them, you can set up your machine using the conda files provided and
run the training code, which is in
fashion-mnist/batch-endpoint/pytorch-src/train.py and
fashion-mnist/batch-endpoint/tf-src/train.py.

Here’s the code where we save the model:

FASHION-MNIST/BATCH-ENDPOINT/PYTORCH-SRC/TRAIN.PY

 

    torch.save(model.state_dict(), path)


 

FASHION-MNIST/BATCH-ENDPOINT/TF-SRC/TRAIN.PY

 

    model.save(MODEL_PATH)
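
Before creating the models on Azure, it can be worth sanity-checking that both saved artifacts load back cleanly. Here's a minimal sketch of such a check; the NeuralNetwork import is hypothetical (adjust it to wherever the class lives in your copy of the training code), and the paths match this project's layout.

import torch
import tensorflow as tf

# Hypothetical import: the NeuralNetwork class is defined with the PyTorch
# training code; adjust the module path to your layout.
from train import NeuralNetwork

# PyTorch: we saved only the state_dict, so we rebuild the module first.
pytorch_model = NeuralNetwork()
pytorch_model.load_state_dict(
    torch.load('fashion-mnist/batch-endpoint/pytorch-model/weights.pth'))
pytorch_model.eval()

# TensorFlow: model.save(...) wrote a full SavedModel directory.
tf_model = tf.keras.models.load_model('fashion-mnist/batch-endpoint/tf-model/')
tf_model.summary()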


 

Next, we need to create the models on Azure. There are many ways to create
resources on Azure. My preferred way is to use a separate YAML file for each
resource and a CLI command to kick off the remote creation, so that's what I'll
show here. Below you can see the YAML files we’ll use in the creation of these
models.

FASHION-MNIST/BATCH-ENDPOINT/CLOUD/MODEL-PYTORCH-BATCH-FASHION.YML

 

$schema: https://azuremlschemas.azureedge.net/latest/model.schema.json
name: model-pytorch-batch-fashion
version: 1
local_path: "../pytorch-model/weights.pth"


 

FASHION-MNIST/BATCH-ENDPOINT/CLOUD/MODEL-TF-BATCH-FASHION.YML

 

$schema: https://azuremlschemas.azureedge.net/latest/model.schema.json
name: model-tf-batch-fashion
version: 1
local_path: "../tf-model/"


 

If you read my managed online endpoints post, you should already be familiar
with the YAML in these files. Please refer to that post for more details on the
contents of these files, as well as the best way to create them from scratch.

We’re now ready to create the models on Azure, which we can do with the
following CLI commands:

 

az ml model create -f fashion-mnist/batch-endpoint/cloud/model-pytorch-batch-fashion.yml
az ml model create -f fashion-mnist/batch-endpoint/cloud/model-tf-batch-fashion.yml


 

If you go to the Azure ML studio, and use the left navigation to go to the
“Models” page, you’ll see our newly created models listed there.

In order to deploy our Azure ML endpoints, we’ll use endpoint and deployment
YAML files to specify the details of the endpoint configurations. I’ll show bits
and pieces of these YAML files throughout the rest of this post as I present
each setting. Let’s start by taking a look at how the deployment YAML files
refer to the models we created on Azure:

FASHION-MNIST/BATCH-ENDPOINT/CLOUD/ENDPOINT-1/DEPLOYMENT.YML

 

...
model: azureml:model-pytorch-batch-fashion:1
...


 

FASHION-MNIST/BATCH-ENDPOINT/CLOUD/ENDPOINT-2/DEPLOYMENT.YML

 

...
model: azureml:model-tf-batch-fashion:1
...


 


CREATING THE SCORING FILES

When invoked, our endpoint will call a scoring file, which we need to provide.
Just like the scoring file for managed online endpoints, this scoring file needs
to follow a prescribed structure: it needs to contain an init() function and a
run(...) function that are called when the batch job starts to run, after the
endpoint is invoked. The init() function is only called once per instance, so
it’s a good place to add shared operations such as loading the model. The
run(...) function is called once per mini-batch, and each call handles the files
in that mini-batch.
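
In other words, every batch scoring script has this shape (a sketch of the contract, not code from this post's project):

def init():
    # Called once per instance when the batch job starts:
    # load the model and set up any shared state here.
    pass

def run(mini_batch):
    # Called for each mini-batch; mini_batch is a list of file paths.
    # Return one result per input file.
    return [f'{file_path}: <prediction>' for file_path in mini_batch]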

First we’ll take a look at the init() function for the PyTorch model (you’ll
find similar TensorFlow code in the post’s project):

FASHION-MNIST/BATCH-ENDPOINT/PYTORCH-SRC/SCORE.PY

 

import argparse
import logging
import os

import torch
from PIL import Image
from torch import Tensor, nn
from torchvision import transforms

def init():
    global logger
    global model
    global device

    arg_parser = argparse.ArgumentParser(description='Argument parser.')
    arg_parser.add_argument('--logging_level', type=str, help='logging level')
    args, _ = arg_parser.parse_known_args()
    logger = logging.getLogger(__name__)
    logger.setLevel(args.logging_level.upper())

    logger.info('Init started')

    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    logger.info('Device: %s', device)

    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'weights.pth')
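    # NeuralNetwork is the model class defined alongside the training code.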
    model = NeuralNetwork().to(device)
    model.load_state_dict(torch.load(model_path, map_location=device))
    model.eval()

    logger.info('Init completed')


 

In our scenario, the main task of this function is to load the model. The
AZUREML_MODEL_DIR environment variable gives us the directory where the model is
located on Azure, which we use to construct the model’s path. Once we have the
model’s path, we use it to load the model.

Notice that logging is done differently from online endpoints. Here, we create
and configure a global logger variable, which we then use by calling
logger.info. You can see in the code above that, in addition to logging the
beginning and end of the function, I also log whether the code is running on GPU
or CPU.

Now let’s look at the run(...) function:

FASHION-MNIST/BATCH-ENDPOINT/PYTORCH-SRC/SCORE.PY

 

labels_map = {
    0: 'T-Shirt',
    1: 'Trouser',
    2: 'Pullover',
    3: 'Dress',
    4: 'Coat',
    5: 'Sandal',
    6: 'Shirt',
    7: 'Sneaker',
    8: 'Bag',
    9: 'Ankle Boot',
}

def predict(model: nn.Module, x: Tensor) -> torch.Tensor:
    with torch.no_grad():
        y_prime = model(x)
        probabilities = nn.functional.softmax(y_prime, dim=1)
        predicted_indices = probabilities.argmax(1)
    return predicted_indices

def run(mini_batch):
    logger.info('run(%s) started: %s', mini_batch, __file__)
    predicted_names = []
    transform = transforms.ToTensor()

    for image_path in mini_batch:
        image = Image.open(image_path)
        tensor = transform(image).to(device)
        predicted_index = predict(model, tensor).item()
        predicted_names.append(f'{image_path}: {labels_map[predicted_index]}')

    logger.info('Run completed')
    return predicted_names


 

In my blog post about managed online endpoints, the run(...) function receives a
JSON file as a parameter. Batch endpoints work a bit differently — here the
run(...) function receives a list of file paths for a mini-batch of data. The
data is specified when invoking the endpoint, and the mini-batch size is
specified in the deployment YAML file, as we’ll see soon. In this scenario,
we’ll invoke the endpoint by referring to the sample-request directory, which
contains several images of clothing items, and we’ll set the mini-batch size to
10. Therefore, the run(...) method receives file paths for 10 images within the
sample-request directory.

For each image in the mini-batch, we transform it into a PyTorch tensor, and
pass it as a parameter to our predict(...) function. We then append the
prediction to a predicted_names list, and return that list as the prediction
result.
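
Since the batch runtime does little more than call init() once and then run(...) with successive lists of file paths, you can smoke-test the scoring script locally before deploying. The driver below is my own sketch, not part of Azure ML: it emulates the AZUREML_MODEL_DIR environment variable and the --logging_level argument that the deployment supplies, then feeds the sample images in mini-batches of 10.

import os
import sys
from pathlib import Path

# Emulate what Azure ML provides to the scoring script.
os.environ['AZUREML_MODEL_DIR'] = 'fashion-mnist/batch-endpoint/pytorch-model'
sys.argv += ['--logging_level', 'INFO']

import score  # the scoring module shown above

score.init()  # once per instance

# Mini-batches of 10, matching mini_batch_size in deployment.yml.
image_paths = sorted(
    str(p) for p in Path('fashion-mnist/batch-endpoint/sample-request').glob('*.png'))
for start in range(0, len(image_paths), 10):
    for prediction in score.run(image_paths[start:start + 10]):
        print(prediction)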

Let’s now look at how we specify the location of the scoring file and the
mini-batch size in the deployment YAML files:

FASHION-MNIST/BATCH-ENDPOINT/CLOUD/ENDPOINT-1/DEPLOYMENT.YML

 

...
code_configuration:
  code:
    local_path: ../../pytorch-src/
  scoring_script: score.py
mini_batch_size: 10
...


 

FASHION-MNIST/BATCH-ENDPOINT/CLOUD/ENDPOINT-2/DEPLOYMENT.YML

 

...
code_configuration:
  code:
    local_path: ../../tf-src/
  scoring_script: score.py
mini_batch_size: 10
...


 


CREATING THE ENVIRONMENTS

An Azure Machine Learning environment specifies the runtime where we can run
training and prediction code on Azure, along with any additional configuration.
In my blog post about managed online endpoints, I present three different ways
to create the inference environment for an endpoint: prebuilt Docker images for
inference, base images, and user-managed environments. I also describe the
options for adding extra packages to curated environments and base images.

Batch endpoints also support all three options for creating environments, but
they don’t support extending prebuilt Docker images with conda files. In this
post’s scenario, we need the Pillow package to read our images in the scoring
file, which none of the available prebuilt Docker images includes. Therefore, we
use base images and extend them with conda files that install Pillow as well as
other packages.

Let’s take a look at the conda files used to extend the base images:

FASHION-MNIST/BATCH-ENDPOINT/CLOUD/ENDPOINT-1/SCORE-CONDA.YML

 

name: pytorch-batch-endpoint-score
channels:
  - pytorch
  - conda-forge
  - defaults
dependencies:
  - numpy=1.20
  - python=3.7
  - pytorch=1.7
  - pillow=8.3.1
  - torchvision=0.8.1
  - pip
  - pip:
    - azureml-defaults==1.32.0


 

FASHION-MNIST/BATCH-ENDPOINT/CLOUD/ENDPOINT-2/SCORE-CONDA.YML

 

name: tf-batch-endpoint-score
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.7
  - pillow=8.3.1
  - pip
  - pip:
    - tensorflow==2.4
    - azureml-defaults==1.32.0


 

Notice that each of the conda files above includes the azureml-defaults package,
which is required for inference on Azure.

We can now create an environment using a base image and each of the conda files
above, which we do directly in the deployment YAML files:

FASHION-MNIST/BATCH-ENDPOINT/CLOUD/ENDPOINT-1/DEPLOYMENT.YML

 

...
environment:
  conda_file: score-conda.yml
  image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest
...


 

FASHION-MNIST/BATCH-ENDPOINT/CLOUD/ENDPOINT-2/DEPLOYMENT.YML

 

...
environment:
  conda_file: score-conda.yml
  image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest
...


 

Notice how I specify that I want the latest version available of that image by
using the “latest” tag. This is a super handy feature!


CREATING THE COMPUTE CLUSTER

Next, let’s create the compute cluster, where we specify the size of the virtual
machine we’ll use to run inference, and how many instances of that VM we want
running in the cluster.

FASHION-MNIST/BATCH-ENDPOINT/CLOUD/CLUSTER-CPU.YML

 

$schema: https://azuremlschemas.azureedge.net/latest/compute.schema.json
name: cluster-cpu
type: amlcompute
size: Standard_DS3_v2
min_instances: 0
max_instances: 4


 

First we need to specify the name for the cluster — I decided on the descriptive
cluster-cpu name. Then we need to choose the compute type. Currently the only
compute type supported is amlcompute, so that’s what we specify.

Next we need to choose a VM size. You can see a full list of supported VM sizes
in the documentation. I decided to choose a Standard_DS3_v2 VM (a small VM
without a GPU) because our inferencing scenario is simple.

And last, I specify that I want a minimum of zero VM instances, and a maximum of
four. Depending on the workload at each moment, Azure will decide how many VMs
to run and it will distribute the work across the VMs appropriately.
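
To make that concrete, here's the back-of-the-envelope arithmetic for this post's scenario (my own numbers, based on the 200-image sample request we'll create later and the mini_batch_size of 10 from the deployment YAML):

import math

num_images = 200        # files in the sample-request directory
mini_batch_size = 10    # from deployment.yml
max_instances = 4       # from cluster-cpu.yml

num_mini_batches = math.ceil(num_images / mini_batch_size)
print(num_mini_batches)  # 20 calls to run(...)

# At full scale-out, each VM handles at least this many mini-batches:
print(math.ceil(num_mini_batches / max_instances))  # 5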

We can now create our compute cluster:

 

az ml compute create -f fashion-mnist/batch-endpoint/cloud/cluster-cpu.yml


 

You can go to the Azure ML studio, use the left navigation to go to the
“Compute” page, click on “Compute clusters,” and see our newly created compute
cluster listed there.

We’re now ready to refer to our compute cluster from within the deployment YAML
files:

FASHION-MNIST/BATCH-ENDPOINT/CLOUD/ENDPOINT-1/DEPLOYMENT.YML &
FASHION-MNIST/BATCH-ENDPOINT/CLOUD/ENDPOINT-2/DEPLOYMENT.YML

 

...
compute: azureml:cluster-cpu
...


 


CREATING THE ENDPOINTS

By now, you’ve seen almost every line of the YAML files used to create the
endpoints. Let’s take a look at the complete deployment and endpoint files for
endpoint-1 to see what else we’re missing.

FASHION-MNIST/BATCH-ENDPOINT/CLOUD/ENDPOINT-1/ENDPOINT.YML

 

$schema: https://azuremlschemas.azureedge.net/latest/batchEndpoint.schema.json
name: endpoint-batch-fashion-1
auth_mode: aad_token


 

FASHION-MNIST/BATCH-ENDPOINT/CLOUD/ENDPOINT-1/DEPLOYMENT.YML

 

$schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
name: blue
endpoint_name: endpoint-batch-fashion-1
model: azureml:model-pytorch-batch-fashion:1
code_configuration:
  code:
    local_path: ../../pytorch-src/
  scoring_script: score.py
environment:
  conda_file: score-conda.yml
  image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest
compute: azureml:cluster-cpu
mini_batch_size: 10
output_file_name: predictions_pytorch.csv


 

The schema provides VS Code with the information it needs to make suggestions
and warn us of problems. You may have noticed that a schema is present in the
YAML file for every resource we’ve created so far. The Azure ML extension for VS
Code is useful for those situations when you want to create a resource but are
not sure which schema to use. If you have this extension installed, you can
click on the Azure icon in the left navigation of VS Code, select your
subscription and workspace, then click on the ”+” icon next to a resource type
to create a template YAML file for that resource.

You’ll need to specify a name for your endpoint — just make sure that you pick a
name that is unique within your resource group’s region. This means that you may
need to change the name of the endpoints I provide, which you can do by changing
the file directly or by specifying a new name in the endpoint creation command
(we’ll see this later). For batch endpoints, always specify the auth_mode to be
aad_token.

Keep in mind that unlike managed online endpoints, batch endpoints don’t support
blue-green deployment. In my managed online endpoints post we added two
deployments with traffic set for 90 and 10, to test a new version of our
deployment on 10% of the inference calls. In batch endpoints, we can also have
several deployment files with the same endpoint_name. However, only one
deployment can be set as default, and the default deployment gets 100% of the
traffic.

Let’s move on in the exploration of the deployment YAML file. The deployment
needs to have an endpoint-name and that name needs to match the name specified
in the endpoint YAML file. We’ve already explored in detail the model,
code_configuration, environment, compute and mini_batch_size sections. The
output_file_name is self-explanatory — it’s the name of the file that will
contain all the predictions for our inputs. I’ll show you later where to find
it.

The second endpoint is very similar to this one. The only difference is that it
points to the TensorFlow scoring code. Now that you understand the endpoint
configuration YAML files in detail, you’re ready to create the endpoints:

 

az ml batch-endpoint create -f fashion-mnist/batch-endpoint/cloud/endpoint-1/endpoint.yml --name <ENDPOINT1>
az ml batch-deployment create -f fashion-mnist/batch-endpoint/cloud/endpoint-1/deployment.yml --set-default --endpoint-name <ENDPOINT1>
az ml batch-endpoint create -f fashion-mnist/batch-endpoint/cloud/endpoint-2/endpoint.yml --name <ENDPOINT2>
az ml batch-deployment create -f fashion-mnist/batch-endpoint/cloud/endpoint-2/deployment.yml --set-default --endpoint-name <ENDPOINT2>


 

If you didn’t specify a unique name in the YAML files, you can do that in the
CLI command by replacing <ENDPOINT1> and <ENDPOINT2> with your unique names.
Also, notice how we set the deployment as default in the CLI, at the time of its
creation.

You can now go to the Azure ML studio to see your endpoints in the UI. Click on
“Endpoints” in the left navigation, then “Batch endpoints” in the top
navigation, and you’ll see them listed there.




CREATING THE REQUEST FILES

Next we’ll explore our request files — the list of files we’ll specify when
invoking the endpoint, which will then be passed to the run(...) function of the
scoring file for inference. If you look at the accompanying project on GitHub,
you’ll see a directory called sample-request containing several images of size
28 × 28 pixels, representing clothing items. When invoking the endpoint, we’ll
provide the path to this directory.

I decided to include the sample-request directory in the git repo for
simplicity. If you want to recreate it, you’ll first need to create the conda
environment specified in conda-pytorch.yml (if you haven’t already), then
activate it, and finally run the code in the
fashion-mnist/batch-endpoint/pytorch-src/create_sample_request.py file.

FASHION-MNIST/BATCH-ENDPOINT/PYTORCH-SRC/CREATE_SAMPLE_REQUEST.PY

 

from torchvision import datasets
import os

DATA_PATH = 'fashion-mnist/batch-endpoint/data'
SAMPLE_REQUEST = 'fashion-mnist/batch-endpoint/sample-request'


def main() -> None:
    """Creates a sample request to be used in prediction."""

    test_data = datasets.FashionMNIST(
        root=DATA_PATH,
        train=False,
        download=True,
    )

    os.makedirs(name=SAMPLE_REQUEST, exist_ok=True)
    for i, (image, _) in enumerate(test_data):
        if i == 200:
            break
        image.save(f'{SAMPLE_REQUEST}/{i+1:0>3}.png')


if __name__ == '__main__':
    main()


 


INVOKING THE ENDPOINTS USING CLI

Now that you have the endpoint YAML files and a directory with sample requests,
you can invoke the endpoints using the following commands:

 

az ml batch-endpoint invoke --name <ENDPOINT1> --input-local-path fashion-mnist/batch-endpoint/sample-request
az ml batch-endpoint invoke --name <ENDPOINT2> --input-local-path fashion-mnist/batch-endpoint/sample-request


 

Make sure you use the names you chose in the creation of your endpoints.

Unlike with managed online endpoints, the invocation call will not immediately
return the result of your predictions — instead, it kicks off an asynchronous
inference run that will produce predictions at a later time. Let’s go to the
Azure ML studio and see what’s going on. Click on “Endpoints” in the left
navigation, then “Batch endpoints,” and then on the name of one of your
endpoints. You’ll be led to a page with two tabs: “Details,” which shows the
information you specified in the endpoint’s YAML file, and “Runs,” where we can
see the status of asynchronous inference runs associated with the endpoint.
Let’s click on “Runs.” You’ll see all the runs that were kicked off by the
invoke command, with a status that may be “Running,” “Completed,” or “Failed.”



Now let’s click on the “Display name” of the latest “Completed” run. (If your
run is still running, feel free to click on it anyway to watch the logs arrive
in real time.) This will take you to a diagram that includes a “score” section
in green, with the word “Completed.”



Next, right-click on the score section and choose “View log” to see the logs for
this run.



This will take you to a page that shows all the logs for the run.
You can read more about what each log means in the documentation. When a run
completes successfully, I’m mostly interested in looking at the logs I added in
the init() and run(...) functions of the scoring file. Those can be found under
logs/user/stdout/10.0.0.6/process.stdout.txt. The logs in the init() function
appear once, and the logs in the run(...) function appear as many times as there
are mini-batches in the sample request.



I encourage you to spend some time getting familiar with the structure of the
logs.

Once a run completes successfully, you’ll want to look at the results of the
prediction, which are stored in blob storage. You can access these by going back
to your run diagram, and right-clicking on the little circle below the
“Completed” section. Then choose “Access data” from the context menu.



This takes you to a blob storage location where you can see a file with the name
you specified in the deployment YAML file, which in our scenario is either
predictions_tf.csv or predictions_pytorch.csv. Right-click on the filename (or
click the triple-dot icon to its right) to show a menu of options, including an
option to “View/edit” and another to “Download.” These CSV files contain one
prediction per line.



Each clothing item in this file corresponds to the prediction for one of the
images in the sample request. This is a great achievement — we got our
predictions!
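
If you download the file, a few lines of Python can summarize it. The sketch below assumes the file sits in the current directory and that each line has the '<image path>: <label>' shape produced by our run(...) function:

from collections import Counter

counts = Counter()
with open('predictions_pytorch.csv') as file:  # downloaded from blob storage
    for line in file:
        # Each line looks like "<image path>: <predicted label>".
        counts[line.rsplit(':', 1)[-1].strip()] += 1

for label, count in counts.most_common():
    print(f'{label}: {count}')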


INVOKING THE ENDPOINTS USING REST

Alternatively, you can invoke your endpoints using a curl POST command. In this
case, we first need to create a dataset with the input data, which we then pass
as a parameter to the curl command. Our input data is in the sample-request
folder, so here’s what the YAML file for our dataset looks like:

FASHION-MNIST/BATCH-ENDPOINT/CLOUD/DATASET-INPUT-BATCH-FASHION.YML

 

$schema: https://azuremlschemas.azureedge.net/latest/dataset.schema.json
name: dataset-input-batch-fashion
local_path: ../sample-request/


 

We can now create the dataset with the following CLI command:

 

az ml dataset create -f fashion-mnist/batch-endpoint/cloud/dataset-input-batch-fashion.yml


 

If you go to the Azure ML studio and click on “Datasets” in the left navigation,
you’ll see your newly created dataset there.

Now let’s look at what the REST call looks like. The
fashion-mnist/batch-endpoint/rest/invoke.sh file contains all the commands you
need to invoke the batch endpoint using REST. You can reuse this file for any of
your projects, by simply replacing the ENDPOINT_NAME, DATASET_NAME, and
DATASET_VERSION with the appropriate information.

FASHION-MNIST/BATCH-ENDPOINT/REST/INVOKE.SH

 

ENDPOINT_NAME=endpoint-batch-fashion-1
DATASET_NAME=dataset-input-batch-fashion
DATASET_VERSION=1

SUBSCRIPTION_ID=$(az account show --query id | tr -d '\r"')
echo "SUBSCRIPTION_ID: $SUBSCRIPTION_ID"

RESOURCE_GROUP=$(az group show --query name | tr -d '\r"')
echo "RESOURCE_GROUP: $RESOURCE_GROUP"

WORKSPACE=$(az configure -l | jq -r '.[] | select(.name=="workspace") | .value')
echo "WORKSPACE: $WORKSPACE"

SCORING_URI=$(az ml batch-endpoint show --name $ENDPOINT_NAME --query scoring_uri -o tsv)
echo "SCORING_URI: $SCORING_URI"

SCORING_TOKEN=$(az account get-access-token --resource https://ml.azure.com --query accessToken -o tsv)
echo "SCORING_TOKEN: $SCORING_TOKEN"

curl --location --request POST $SCORING_URI \
--header "Authorization: Bearer $SCORING_TOKEN" \
--header "Content-Type: application/json" \
--data-raw "{
    \"properties\": {
        \"dataset\": {
            \"dataInputType\": \"DatasetVersion\",
            \"datasetName\": \"$DATASET_NAME\",
            \"datasetVersion\": \"$DATASET_VERSION\"
        },
        \"outputDataset\": {
            \"datastoreId\": \"/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/datastores/workspaceblobstore\",
            \"path\": \"$ENDPOINT_NAME\"
        }
    }
}"


 

Notice that before we execute the curl command, we query for a scoring token,
which we then use as the bearer token in our POST call.
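
If you'd rather issue the call from Python than from curl, the same request is easy to reproduce with the requests library. The sketch below is my own adaptation of the shell script: it shells out to the Azure CLI for the scoring URI and token, and it sends only the dataset portion of the body (add the outputDataset block from the script above if you want to control where the predictions land).

import subprocess
import requests

def az(*args: str) -> str:
    """Run an Azure CLI command and return its trimmed tsv output."""
    result = subprocess.run(['az', *args, '--output', 'tsv'],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

endpoint_name = 'endpoint-batch-fashion-1'
scoring_uri = az('ml', 'batch-endpoint', 'show',
                 '--name', endpoint_name, '--query', 'scoring_uri')
token = az('account', 'get-access-token',
           '--resource', 'https://ml.azure.com', '--query', 'accessToken')

body = {
    'properties': {
        'dataset': {
            'dataInputType': 'DatasetVersion',
            'datasetName': 'dataset-input-batch-fashion',
            'datasetVersion': '1',
        },
    },
}
response = requests.post(
    scoring_uri, headers={'Authorization': f'Bearer {token}'}, json=body)
response.raise_for_status()
print(response.json())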

Now you can simply run this script to invoke the batch endpoint:

 

fashion-mnist/batch-endpoint/rest/invoke.sh


 

Invoking the endpoint this way triggers the same sequence of events in the
Azure ML studio that we've already covered in the previous section.


CONCLUSION

In this post, you learned how to create a batch endpoint on Azure ML. You
learned how to write a scoring file, and how to create model and cluster
resources on Azure ML. Then you learned how to use those resources to create the
endpoint itself, and how to invoke it by giving it a directory of image
resources. And finally, you learned to look at the logs and at the file
containing the predictions. Congratulations on acquiring a new skill!

The project associated with this post can be found on GitHub.

Thank you to Tracy Chen from Microsoft for reviewing the content in this post.


ABOUT THE AUTHOR

Bea Stollnitz is a principal developer advocate at Microsoft, focusing on Azure
ML. See her blog for more in-depth articles about Azure ML and other machine
learning topics.
