Missing a listener for the tcp/http responses #54

Open · darlintonprauchner opened this issue Oct 24, 2018 · 8 comments

@darlintonprauchner commented Oct 24, 2018
Hi, right now I can listen to requests and bypass them, but I can't listen to what the server responds with when I bypass.

That would let me log the response and use it as a mock for the next identical request.

@moll (Owner) commented Oct 24, 2018

Hey! I reckon #51 is applicable to you, too. The short story is that Mitm.js is foremost a testing library for mocking outgoing requests, not really for eavesdropping on yourself. Having said that, the intercept-and-make-a-new-bypassed-request solution should achieve what you want.
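Roughly something like the following untested sketch. The module-level bypassing flag assumes one request in flight at a time, and port parsing of the Host header is omitted:

```js
var Mitm = require("mitm")
var http = require("http")

var mitm = Mitm()
var bypassing = false

mitm.on("connect", function(socket, opts) {
  // Let only our own replayed requests through to the real server.
  if (bypassing) socket.bypass()
})

mitm.on("request", function(req, res) {
  // Replay the intercepted request against the real server,
  // then log the real response and forward it to the caller.
  bypassing = true

  var replay = http.request({
    host: req.headers.host.split(":")[0],
    path: req.url,
    method: req.method,
    headers: req.headers
  }, function(real) {
    bypassing = false
    console.log("Got %d with headers %j", real.statusCode, real.headers)
    res.writeHead(real.statusCode, real.headers)
    real.pipe(res)
  })

  req.pipe(replay)
})
```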

Mind if I ask why you need to listen to the requests first at all? Shouldn't you know what requests your code is making so you can mock them straight away? ^_^

@darlintonprauchner (Author) commented Oct 24, 2018

Hi, I am looking for automated mock generation for functional testing, something like this:

1st request -> listen to the parameters and the response, record them to a file
2nd identical request -> the parameters match a recorded request, so send back the recorded mock

#51 is very similar to what I need.
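To sketch it roughly (untested; fetchReal here is a hypothetical helper that replays the request against the real server with bypassing, like in the snippet above, and persisting recordings to a file is left out):

```js
var Mitm = require("mitm")
var mitm = Mitm()

// Recorded responses keyed by "METHOD url". A real tool would
// persist these to a file and load them back on the next run.
var recordings = {}

mitm.on("request", function(req, res) {
  var key = req.method + " " + req.url
  var recorded = recordings[key]

  if (recorded) {
    // 2nd identical request: the parameters match, replay the mock.
    res.writeHead(recorded.statusCode, recorded.headers)
    res.end(recorded.body)
  }
  else fetchReal(req, function(statusCode, headers, body) {
    // 1st request: record the real response, then forward it.
    // fetchReal is hypothetical: it re-requests with bypassing enabled.
    recordings[key] = {statusCode: statusCode, headers: headers, body: body}
    res.writeHead(statusCode, headers)
    res.end(body)
  })
})
```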

@ianwsperber

Hi @darlintonprauchner! As I mentioned recently in #51, I'm currently in the middle of writing a library that implements this functionality on top of Mitm. It should be in a shareable state within the week; I can send you a link when it's complete.

@moll (Owner) commented Oct 24, 2018

> Hi, I am looking for automated mock generation for functional testing, something like this:

Wouldn't that couple your tests to another service? If you're okay with that, why not let all requests go to that service?

@darlintonprauchner (Author)

Thanks @ianwsperber!

@moll, what I want is to be able to run functional tests without depending on other services (mocking them all). I can either generate the mocks myself with something like Sinon, or create a tool that generates them for me before I commit them. Mitm seems to be the closest thing to a solution from what I've searched so far.

@moll (Owner) commented Oct 25, 2018

That's certainly what I made Mitm.js for: mocking remote services. Are you after generating mocks once and then carrying on tweaking them manually, or always autogenerating them at compile time?

I've personally never needed to generate them automatically. I tend to write minimal mocks while implementing the functionality in the first place. If it's a simple one-request-one-response service, I go with plain Mitm and perhaps a shared function to set headers and serialize a response. If it's a more complicated flow, I send Mitm's request to Express and use its convenient routing to respond (something close to mitm.on("request", Express())). The latter is one of the reasons I find Nock's approach to mocking inflexible. Here's how I do it: https://github.com/rahvaalgatus/rahvaalgatus/blob/master/test/mitm.js
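For illustration, a minimal sketch of that Express approach (the route and payload here are made up):

```js
var Mitm = require("mitm")
var Express = require("express")

var mitm = Mitm()
var app = Express()

// Hypothetical route standing in for the remote service's API.
app.get("/v1/users/:id", function(req, res) {
  res.json({id: req.params.id, name: "Alice"})
})

// An Express app is itself a (req, res) handler, so it can answer
// Mitm's intercepted requests directly.
mitm.on("request", app)
```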

@darlintonprauchner (Author)

I plan to commit the generated mocks with the code. When the code changes, I'll delete the affected mocks, let them be regenerated, and commit the new versions.

We could possibly generate them once and tweak them afterwards; whatever helps me spend less time writing mocks and more time writing tests. :)

@moll (Owner) commented Oct 25, 2018

Umm, isn't deleting the mocks defeating the point? Aren't they there to assert that your code is making the right requests and behaving correctly given known-good responses? Should the 3rd-party API's response change slightly, wouldn't manually changing the mocks, seeing the related tests fail, and only then fixing the code be in line with the goal of having tests in the first place?

This reminds me of autogenerated UI (browser) tests created through point-and-click test-creation tools, which result in an unmaintainable mess of overly verbose DOM selectors that break the moment a button's class or title text is slightly modified. I can't help but think autogenerated or recorded mocks share the same fate: replaying irrelevant minutiae like descriptive headers or response-body fields (when returning structured data) that the code under test never cares about. Naturally, I'm open to being convinced otherwise. ^_^
