When a user enters a prompt in st.chat_input, it is not possible to disable the input while an LLM is generating the response to that input.
This means the user can submit another prompt mid-generation, 'interrupting' the model's output and breaking the structure of the conversation.
I've tried binding the disabled argument to a flag in st.session_state, but it does not work.