How to calculate evaluation metrics on training data in TensorFlow's Object Detection API?

I have been using the Object Detection API for quite a while now, so training models and using them for inference works fine. Unfortunately, when using TensorBoard to visualize metrics (such as mAP, AR, and classification/localization loss), we only get to see those metrics on the validation set. I'd like to calculate the aforementioned metrics on the training data as well, so that train and validation metrics can be compared in TensorBoard.



edit: I've stumbled upon this post, which addresses the same concern: how to check both training/eval performances in tensorflow object_detection



Anyone got a pointer on how to achieve this?

python-3.x tensorflow tensorboard object-detection-api
asked Jan 18 at 20:20 by Kishintai, edited Jan 19 at 13:23
1 Answer

You can evaluate your model on the training data by adding the arguments --eval_training_data=True --sample_1_of_n_eval_on_train_examples=10 to the arguments of model_main.
By doing so, you instruct it to perform the evaluation on the training data, and you choose how much to subsample the training data sent to evaluation, since the amount of training data is usually very large.
The catch is that I don't think it's currently possible to evaluate on both the training and the validation data in the same run, but I don't think that's too bad, since evaluation on the training data is usually only a sanity check, not an actual continuous evaluation of the model.
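
Concretely, the invocation might look something like the following (a sketch only: the paths are placeholders and the exact location of model_main.py depends on how the Object Detection API is installed):

    # Sketch only -- substitute your own pipeline config and model directory.
    python object_detection/model_main.py \
        --pipeline_config_path=path/to/pipeline.config \
        --model_dir=path/to/model_dir \
        --eval_training_data=True \
        --sample_1_of_n_eval_on_train_examples=10

Here sample_1_of_n_eval_on_train_examples=10 means only every 10th training example is sent to evaluation, which is the "dilution" mentioned above.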






answered Jan 23 at 14:36 by netanel-sam
Thanks for the clarification! About the flag eval_training_data, it is stated that it is only used in eval-only mode and that a checkpoint_dir must be provided. Is the checkpoint_dir the same as the model_dir? Also, would evaluation on training data not give clues about an overfitted object detector?

– Kishintai, Jan 23 at 16:00
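
For reference, and as a sketch only rather than a documented recipe, an eval-only run on the training data would look something like this, assuming the flag behaviour quoted in the comment above (all paths are placeholders):

    # Sketch only, assuming the eval-only behaviour described in the comment above:
    # providing --checkpoint_dir makes model_main run evaluation instead of training,
    # and --eval_training_data then points that evaluation at the training input.
    python object_detection/model_main.py \
        --pipeline_config_path=path/to/pipeline.config \
        --model_dir=path/to/eval_output_dir \
        --checkpoint_dir=path/to/training_model_dir \
        --eval_training_data=True \
        --sample_1_of_n_eval_on_train_examples=10 \
        --run_once=True

--run_once limits the job to a single evaluation pass instead of waiting for new checkpoints. Whether checkpoint_dir should simply be the directory used as model_dir during training is exactly the commenter's open question, so the path above is only an assumption.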




