
Create magic stub URLs for spoofed testing #18

@ericbeland

Description

Problem Description

When we are running large-scale system tests, we generally don't need to make external requests in order to vet most of our system. Our concerns are the volume of data flowing through, its correctness, scalability, etc. We want to understand things like sizing. We don't want to generate a pointlessly large bandwidth bill, and aiming load at a real site can be inconvenient for them and for us, and of questionable legality.

Proposal

We create a mode where the system talks to "itself" to test scaling. What I mean by that is: instead of sending the request out to the web, it stays local or gets spoofed data. That way, we could run massive tests without a massive bandwidth bill. Pages could be "served" by having the proxy read local files.

Customers might even want to do this themselves (for demos, learning, testing).

Maybe have the proxy do the "read" and stub the responses in from there.
Maybe it could just do it for some made-up URL automatically.

  • maybe we can make proxy return stubbed response on all outgoing requests, is that what you mean?
  • yeah
  • Then it can never "go down" and we don't have to scale "it"

There are two ways I think we could approach this.

  1. Answer a specific magic URL or URL regexp with local stub data
  2. Answer every external request in a certain mode with stub data

I think #1 is the best/easiest start for now, but #2 is pretty interesting. #1 requires the user's test script to be written to talk to the stub, which is a fine scenario for these things. We could use this magic URL to fake a 10-million-user test, potentially, and know that the rest of the system is up to snuff.
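As a rough sketch of option #1, the proxy's request path could check each outgoing URL against a magic pattern before dialing out, and answer matching requests with local stub data instead. Everything here is hypothetical: the `stub.internal` hostname, the `stub_response` helper, and the canned payload are assumptions for illustration, not part of any existing API.

```python
import re

# Hypothetical magic hostname: requests aimed here never leave the box.
MAGIC_URL_RE = re.compile(r"^https?://stub\.internal/")

# Canned payload; in practice this could be read from a local file.
STUB_BODY = b"<html><body>stubbed page</body></html>"

def stub_response(url):
    """Return a canned (status, body) pair for magic URLs, or None to
    signal that the request should go out over the real network."""
    if MAGIC_URL_RE.match(url):
        # Served locally: no bandwidth cost, nothing external to "go down".
        return 200, STUB_BODY
    return None  # fall through to the normal outbound request path

# The proxy's request handler would call stub_response(url) first and
# only make a real connection when it returns None.
```

Because the match happens inside the proxy, the user's test script only needs to target the magic URL; no separate stub server has to be deployed or scaled.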

In theory, the stub could even generate the metrics without "having" or "returning" the page at all; at the end of the day, the rest of the system only knows about the proxy's traffic via the metrics. However, I think we want VUs (virtual users) to make "requests" to the stub URL(s) to generate their traffic, so the pattern/data seems realistic.

If we get to #2 at some point, we'd let customers do a "capture", run against their own fake site, and match the requests. Blazemeter does something like this.

Alternatives

We could run a local server, but that has CPU / memory cost, and we don't want something that can "go down" in this case.
