Python, Keras - ValueError: Cannot feed value of shape (10, 70, 1025) for Tensor u'dense_2_target:0', which...

I am trying to train an RNN in batches. The input size is (10, 70, 3075), where 10 is the batch size, 70 the time dimension, and 3075 the frequency dimension.



There are three outputs, each of size (10, 70, 1025): basically 10 spectrograms of size (70, 1025).



I would like to train this RNN by regression; its structure is:



input_img = Input(shape=(70, 3075))
x = Bidirectional(LSTM(n_hid, return_sequences=True, dropout=0.5, recurrent_dropout=0.2))(input_img)
x = Dropout(0.2)(x)
x = Bidirectional(LSTM(n_hid, dropout=0.5, recurrent_dropout=0.2))(x)
x = Dropout(0.2)(x)
o0 = Dense(1025, activation='sigmoid')(x)
o1 = Dense(1025, activation='sigmoid')(x)
o2 = Dense(1025, activation='sigmoid')(x)


The problem is that the output Dense layers cannot accept three-dimensional targets; they expect something like (None, 1025), which I don't know how to provide unless I concatenate along the time dimension.



The following error occurs:




ValueError: Cannot feed value of shape (10, 70, 1025) for Tensor u'dense_2_target:0', which has shape '(?, ?)'




Would the batch_shape option in the input layer be useful? I have actually tried it, but I got the same error.
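For reference, the error presumably surfaces at training time with something like the following sketch; the Model construction matches the code above, while the optimizer and mse loss are assumptions, not from the original post.

import numpy as np
from keras.models import Model

model = Model(inputs=input_img, outputs=[o0, o1, o2])
model.compile(optimizer='adam', loss='mse')  # assumed settings

X = np.random.rand(10, 70, 3075)                       # one input batch
Ys = [np.random.rand(10, 70, 1025) for _ in range(3)]  # three targets
model.fit(X, Ys, batch_size=10)                        # raises the ValueError above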

python tensorflow keras recurrent-neural-network

asked Jan 18 at 20:21 by Phys

2 Answers

In this instance the second RNN is collapsing the sequence to a single vector because, by default, return_sequences=False. To make the model return sequences and run the Dense layer over each timestep separately, just add return_sequences=True to the second RNN as well:

x = Bidirectional(LSTM(n_hid, return_sequences=True, dropout=0.5, recurrent_dropout=0.2))(x)

The Dense layers automatically apply to the last dimension, so there is no need to reshape afterwards.
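As a sanity check, here is a minimal sketch of the corrected model end to end. The value of n_hid, the optimizer, and the mse loss are assumptions for illustration; they are not given in the question or the answer.

import numpy as np
from keras.models import Model
from keras.layers import Input, Bidirectional, LSTM, Dropout, Dense

n_hid = 64  # assumed value, not specified in the question

input_img = Input(shape=(70, 3075))
x = Bidirectional(LSTM(n_hid, return_sequences=True, dropout=0.5, recurrent_dropout=0.2))(input_img)
x = Dropout(0.2)(x)
# The fix: return_sequences=True here too, so the time dimension survives
x = Bidirectional(LSTM(n_hid, return_sequences=True, dropout=0.5, recurrent_dropout=0.2))(x)
x = Dropout(0.2)(x)
outputs = [Dense(1025, activation='sigmoid')(x) for _ in range(3)]  # each (None, 70, 1025)

model = Model(inputs=input_img, outputs=outputs)
model.compile(optimizer='adam', loss='mse')  # assumed settings

X = np.random.rand(10, 70, 3075)
Ys = [np.random.rand(10, 70, 1025) for _ in range(3)]
model.train_on_batch(X, Ys)  # shapes now match the (10, 70, 1025) targets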
answered Jan 18 at 22:01 by nuric

• This one worked for me, thanks! – Phys, Jan 19 at 0:30


To get the right output shape, you can use the Reshape layer:

o0 = Dense(70 * 1025, activation='sigmoid')(x)
o0 = Reshape((70, 1025))(o0)

This will output (batch_dim, 70, 1025). You can do exactly the same for the other two outputs.
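For all three outputs, the same pattern might look like the sketch below; it builds on the x from the question's code (2-D here, since the second LSTM keeps return_sequences=False in this approach), and Reshape is the standard Keras layer.

from keras.layers import Dense, Reshape

# x has shape (None, 2 * n_hid) after the second, non-sequence LSTM
o0 = Reshape((70, 1025))(Dense(70 * 1025, activation='sigmoid')(x))
o1 = Reshape((70, 1025))(Dense(70 * 1025, activation='sigmoid')(x))
o2 = Reshape((70, 1025))(Dense(70 * 1025, activation='sigmoid')(x))

Note that each such Dense layer has 70 * 1025 = 71,750 units, so this variant is considerably more parameter-heavy than the return_sequences=True fix in the other answer.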
answered Jan 18 at 21:53 by Matias Valdenegro