Propagate end_stream from downstream http/2 requests #7196
Conversation
Thanks! Generally strong +1 on this, great find. :) This needs a test in
I'm also wondering if we can just change
to if self.flow.request.stream and not event.end_stream: as a simpler patch, but that is easy to test out once we have a test. Thanks!
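Read as a sketch, the suggested guard skips forwarding a DATA frame whenever the downstream stream already ended on the HEADERS frame. A minimal, self-contained model of that condition (the function name is hypothetical; this is not mitmproxy's actual handler):

```python
def should_forward_data_frame(stream: bool, end_stream: bool) -> bool:
    """Model of the suggested guard: self.flow.request.stream and not event.end_stream.

    stream:     whether the request body is streamed through unbuffered
    end_stream: whether the downstream HTTP/2 frame carried END_STREAM

    A body-less GET sets END_STREAM on HEADERS, so no empty DATA frame
    should be forwarded upstream in that case.
    """
    return stream and not end_stream

# Streaming request with more data to come: forward the frame.
assert should_forward_data_frame(stream=True, end_stream=False)
# Stream already ended on HEADERS: suppress the empty DATA frame.
assert not should_forward_data_frame(stream=True, end_stream=True)
```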
Thank you for the comment. The suggested change does make sense and could be more elegant; I'll try to make a test and then verify whether that change works. What did you mean by "and ideally the same logic applied to responses"? Was this regarding tests or something else?
The PR currently removes empty data frames for requests, but it doesn't do so for responses. We basically have the same thing again here for responses:

mitmproxy/mitmproxy/proxy/layers/http/__init__.py, lines 412 to 415 at db61b4b

This should be fixable in exactly the same way, so I suggest we do both in one go. :)
@mhils Sadly I gave it a shot, but I'm totally unfamiliar with the playbook APIs, and ChatGPT was also not up to the task, so I'll need some more guidance.

So how can I peek at (or retrieve?) the data sent to the upstream server? What should go in place of
Hi @rubu!

Not really, but this here is relevant.

My recommendation would be to skip ChatGPT for this. For OpenConnection you need a

Try making a copy of

    << http.HttpRequestHeadersHook(flow)
    >> reply()

into

    << http.HttpRequestHeadersHook(flow)
    >> reply(side_effect=enable_streaming)

That should get you onto the right track. If not, let me know and I'm happy to lend a hand. :)
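The playbook change above can be read as: when the request-headers hook fires, the test's reply runs a side effect that switches the flow into streaming mode before any body data arrives. A toy stand-in for that pattern in plain Python (not mitmproxy's playbook API; the Flow and reply names here are invented for illustration):

```python
from typing import Callable, Optional

class Flow:
    """Minimal stand-in for a proxied HTTP flow."""
    def __init__(self) -> None:
        self.stream = False  # whether body data passes through unbuffered

def enable_streaming(flow: Flow) -> None:
    # The side effect the test injects when replying to the headers hook.
    flow.stream = True

def reply(flow: Flow, side_effect: Optional[Callable[[Flow], None]] = None) -> None:
    """Stand-in for the playbook's reply(): optionally run a side effect."""
    if side_effect is not None:
        side_effect(flow)

flow = Flow()
reply(flow)                                # plain reply: nothing changes
assert not flow.stream
reply(flow, side_effect=enable_streaming)  # reply with side effect
assert flow.stream
```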
*) add test to check if end_stream is propagated on request / response
*) simplify the patch
Hi @mhils, OK, I figured out how to make the test cover both directions (request and response), but the response fix was a bit more complex, since applying the same logic broke a WebSocket test: the bytes on the wire were not deframed properly, which seemed logical since your proposed strategy of avoiding the call to

Does this PR not seem good, or is there something that should be adjusted? I am not a Python person, so I'm not sure if the
OK, now I actually see the code coverage check failing due to the extra if:

What is the proper approach here? Move the self.flow.response inside the whole check and just leave the method as bool? I wanted the
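One common way to satisfy branch coverage in a situation like this is to fold the extra if into the boolean expression the method returns, so the existing tests exercise every branch. A hypothetical sketch (all names invented; not mitmproxy's code):

```python
class FlowState:
    """Toy model of the state consulted when deciding whether to stream."""
    def __init__(self, response_present: bool, streaming: bool) -> None:
        self.response_present = response_present
        self.streaming = streaming

    def should_stream(self) -> bool:
        # A single boolean expression instead of a nested `if`, so a
        # coverage tool sees one decision point rather than two.
        return self.response_present and self.streaming

assert FlowState(True, True).should_stream()
assert not FlowState(True, False).should_stream()
assert not FlowState(False, True).should_stream()
```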
@rubu: Thanks! This looks great so far; thank you for doing the majority of the grunt work with tests etc. I can't give you an immediate answer without digging deeper, so let me play around with this in the coming days. :)
Thanks again! I ended up adapting the WebSocket test; now the actual code diff is super simple. 🎉
Description
I encountered an issue where mitmdump was failing with a request that worked in the browser. It was a GET request with no body, but the server was throwing an error about the GET request having a body.

After looking at packet captures, the difference between mitmdump and the browser was that the browser was sending END_STREAM on the HEADERS frame, whereas mitmdump was sending an empty DATA frame with END_STREAM. I patched mitmdump to propagate END_STREAM if it is present on the downstream HEADERS frame.

Tests pass and my test case works. Could anyone elaborate on whether this can cause other issues? If so, I can adjust this to get to a state where it can be merged. Since this is breaking for me, I'll roll with a local patch for now, but it would be great to use an official release.
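The two frame sequences described above can be modeled with simple tuples (a sketch of the observed behavior, not real HTTP/2 serialization; the function name is invented):

```python
def frames_for_bodyless_get(propagate_end_stream: bool) -> list:
    """Return (frame_type, end_stream_flag) pairs for a GET with no body.

    propagate_end_stream=True  -> browser-like: END_STREAM set on HEADERS.
    propagate_end_stream=False -> pre-patch mitmdump: HEADERS without
    END_STREAM followed by an empty DATA frame carrying it, which strict
    servers interpret as a GET request with a body.
    """
    if propagate_end_stream:
        return [("HEADERS", True)]
    return [("HEADERS", False), ("DATA", True)]

assert frames_for_bodyless_get(True) == [("HEADERS", True)]
assert frames_for_bodyless_get(False) == [("HEADERS", False), ("DATA", True)]
```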
Checklist