NLU module extracts entire sentence instead of slots


I have a bot created from the “Welcome Bot” template provided by Botpress. In this bot, NLU extraction works correctly only some of the time; most of the time the entire sentence, i.e. the entire user input, is extracted as a slot.

The following is a description of the NLU setup:

  • I have provided more than 20 utterances for two intents.

  • There are two slots in each intent, and a single marked slot in each utterance.

  • After training the chatbot, the NLU extraction module extracts the entire sentence and shows it as the extracted slot.

  • While testing in the emulator, I display the slot using the variable {{session.slots.SlotName.value}} and also check it in the debugger (in the raw JSON data structure). The value displayed in the emulator chat window and the one in the data structure are the same: the entire user input.
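To make the symptom concrete, here is a minimal sketch in plain JavaScript of reading a slot value the way the template variable does. The session shape here is an assumption mirroring `{{session.slots.SlotName.value}}`, not the exact Botpress internal structure, and the stored value shows the buggy behavior (the whole utterance instead of the slot):

```javascript
// Hypothetical session object mirroring the {{session.slots.SlotName.value}}
// template variable; field names are assumptions for illustration only.
const session = {
  slots: {
    SlotName: { name: 'SlotName', value: 'I want to visit Paris' }, // entire input, not the slot
  },
};

// Safely read a slot value, returning null when the slot was not extracted.
function getSlotValue(session, slotName) {
  const slot = session.slots && session.slots[slotName];
  return slot ? slot.value : null;
}

console.log(getSlotValue(session, 'SlotName'));  // "I want to visit Paris"
console.log(getSlotValue(session, 'OtherSlot')); // null
```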

Only very rarely does it extract the slot it is actually expected to pick from the user input.

What could possibly be going wrong? Kindly suggest how to fix this.

Thanks in advance,

Hi @Vishwa !

Can you please show me your training utterances and the user input that reproduces this behavior? You can also post your bot in this thread if you prefer.

Keep in mind that slot extraction is a machine learning task and can sometimes produce unpredictable results.

There are ways to make it more robust, which I can help you with better if I see your training utterances.

Another possibility is to add manual validation.

Imagine a bot that allows you to buy fruits.

Your user would say: “I would like to buy a banana”, where banana is the slot fruit_to_buy.

You could make your bot ask something like:

“Is ‘banana’ the fruit you want to buy? (y/n)” ----> The user would say yes.

“Is ‘I would like to buy a banana’ the fruit you want to buy? (y/n)” ----> The user would say no.
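A quick sketch of how such a confirmation question could be built. This is just the idea in plain JavaScript, not a Botpress API; the function name and prompt format are invented for illustration:

```javascript
// Build a yes/no confirmation question for a candidate slot value.
// Purely illustrative; not part of any Botpress module.
function confirmationPrompt(candidate, slotLabel) {
  return `Is '${candidate}' the ${slotLabel} you want to buy? (y/n)`;
}

confirmationPrompt('banana', 'fruit');
// → "Is 'banana' the fruit you want to buy? (y/n)"
```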

Hope this helps,



Hello @frank_levasseur,
Thank you for your response.

I am enclosing screenshots of utterances (most of them) along with slots. I have also enclosed a sample user input which produces this behavior.

The utterances are for the intent "new_place_details" with two slots, name_of_place and location_of_place.

The user input that produces this behavior is the last input at the end of the chat session, where the bot reproduces the entire user sentence rather than the slot.

My observation is that when the extracted slot is correct, I can see the NLU intent (i.e. top intents) along with the confidence levels (in %). But when the slot extraction is incorrect, I do not see the top NLU intent at all (in the screenshot enclosed above).

Kindly let me know what is going wrong here.

Thanks in advance.

Hi again!

Do name_of_place and location_of_place have associated entities, or are they any-type slots?

Creating an entity for the possible places will greatly help with slot extraction.

Also, in your screenshot, can you show me the raw NLU payload instead of the summary? Thank you.



Hello @frank_levasseur,

Thanks for the response.

There are two entities of type list: one associated with name_of_place and the other with location_of_place.
Both slots have the type “any” as well as the corresponding list.
Can creating a slot with two associated entities lead to any issue?

I am enclosing screenshots for the raw NLU payload. Please check the same.

From the raw JSON data, it can be seen that when the NLU extracts the entire sentence, the entity is of type any even though there is an associated list.
So I will try with both slots as list entities.

Thanks in advance,

Hi again @Vishwa,

Giving the type any to a slot is what makes it more unpredictable.

Long story short, our slot tagging is machine learning based and still has room for improvement. To make it more resilient, we deliberately make it overfit on the result of entity extraction. The entity extraction is purely rule-based and is not a machine learning task, so it’s pretty robust.

When a slot only has types referring to entities, it extracts the same thing as the entity extractor most of the time.

But when a slot has the any type, it gets quite a high degree of freedom, which sometimes leads to weird results.
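To picture why list entities are so much more predictable than any-type slots: list-entity extraction boils down to a dictionary lookup over the utterance, with no learned model involved. Here is a toy sketch of that idea in plain JavaScript; the entity structure and the place names are made up for the example and are not Botpress's internal format:

```javascript
// Toy list entity: each occurrence has a canonical value plus synonyms.
// Values and synonyms are invented for this example.
const placeEntity = {
  name: 'location_of_place',
  occurrences: [
    { value: 'Montreal', synonyms: ['montreal', 'city of montreal'] },
    { value: 'Quebec City', synonyms: ['quebec', 'quebec city'] },
  ],
};

// Rule-based matching: return the canonical value of the first occurrence
// whose value or synonym appears in the utterance, or null if none match.
function extractListEntity(utterance, entity) {
  const text = utterance.toLowerCase();
  for (const occ of entity.occurrences) {
    const candidates = [occ.value.toLowerCase(), ...occ.synonyms];
    if (candidates.some((syn) => text.includes(syn))) {
      return occ.value;
    }
  }
  return null;
}

extractListEntity('It is located in city of Montreal', placeEntity); // → "Montreal"
extractListEntity('Somewhere far away', placeEntity);                // → null
```

Because the match is a plain string lookup, it behaves the same way on every run, which is what makes entity-backed slots robust.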

When possible, you should ideally not use the any type for a slot. When you really can’t do otherwise, make sure your bot has some sort of validation that goes like this:

User: It is located in city of Montreal
Bot: Is "It is located in city of Montreal" the place you visited?
        [Yes] [No]
User: No
Bot: Then type exactly the place you've visited.
User: City of Montreal
Bot: Is "City of Montreal" the place you visited?
        [Yes] [No]
User: Yes
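The confirm-or-correct loop above can be sketched as a small function driven by a scripted sequence of user replies. This is a stand-in for real chat turns, not the Botpress flow engine; the function and field names are invented for illustration:

```javascript
// Keep the extracted value only if the user confirms it; otherwise replace
// it with whatever the user typed as a correction, then ask again.
function confirmSlot(initialValue, replies) {
  let value = initialValue;
  for (const reply of replies) {
    if (reply.answer === 'Yes') return value; // user confirmed this value
    value = reply.correction;                 // user typed the exact place instead
  }
  return value;
}

confirmSlot('It is located in city of Montreal', [
  { answer: 'No', correction: 'City of Montreal' },
  { answer: 'Yes' },
]);
// → "City of Montreal"
```

The loop guarantees that a wrongly tagged slot (like the full sentence) can never silently reach the rest of the flow.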

Also, it is much harder for our slot extractor to predict the correct slots when there are two on the same intent. I don’t see why you need two different slots here. Try simplifying to one slot, say place, with two entities (one list entity for location and one for name). This will help slot extraction a lot.

Of course, giving more training utterances with different configurations of possible name_of_place or location_of_place will greatly help your bot.

Basically, try the following:

  1. Start by using only one slot place with two entities.
  2. Remove the type any from your slot.
  3. Add more training utterances with many configurations of places.
  4. If you can’t do step 1, 2 or 3 for some reason, add manual validation in your bot (with yes/no questions).

Hope this information helps,

Good luck with your bot, and feel free to post again on this forum if you have any other questions.


PS: Many thanks for your contribution to this forum. I feel like this topic contains really valuable information for our community.


Hello @frank_levasseur,

Thank you very much for the response.
It is definitely helpful.