Simulate Alexa interactions for testing and debugging (beta). It is heavily inspired by virtual-alexa.
alexam can mock several types of requests from the Alexa Skills Kit, such as LaunchRequest, IntentRequest, and Alexa.Presentation.APL.UserEvent, and send them to your handler, so you can test your handler easily.
alexam retains session attributes until the session ends, so you can test multi-turn interactions.
You can configure alexam easily: new AlexamBuilder().setHandler(handlerObj).build() is the minimum configuration.
If you want to set a pre-defined context or session object, you can build and set these easily.
You can easily build LaunchRequest, IntentRequest, SessionEndedRequest, and Alexa.Presentation.APL.UserEvent type requests, and you can also build your own custom requests.
Mock requests built by alexam have the same type as RequestEnvelope, the general request interface in the official SDK, and responses from alexam are of type ResponseEnvelope, the general response interface in the official SDK. This means you can simulate the actual execution environment with alexam.
$ yarn add -D alexam
$ npm install --save-dev alexam
The following is a minimal example that simulates a skill interaction with alexam. If you want to see more realistic usage of alexam, please see the example test cases.
import { AlexamBuilder, LambdaHandler } from "alexam";
import { SkillBuilders, HandlerInput, RequestHandler } from "ask-sdk";

// Configure handler
const LaunchRequestHandler: RequestHandler = {
  canHandle(handlerInput: HandlerInput): boolean {
    return handlerInput.requestEnvelope.request.type === "LaunchRequest";
  },
  handle(handlerInput: HandlerInput) {
    const speechText = "Welcome to the Alexa Skills Kit, you can say hello!";
    return handlerInput.responseBuilder.speak(speechText).getResponse();
  },
};

const handler = SkillBuilders.custom()
  .addRequestHandlers(LaunchRequestHandler)
  .lambda();

// Configure alexam
const alexam = new AlexamBuilder()
  .setHandler(new LambdaHandler(handler))
  .build();

// Build mock LaunchRequest
const launchRequest = alexam.requestFactory.launchRequest();

// Send mock LaunchRequest to the handler by using alexam.
Promise.resolve(alexam.send(launchRequest)).then(response => {
  console.log(`response is ${JSON.stringify(response)}`); // -> response is {"version":"1.0","response":{"outputSpeech":{"type":"SSML","ssml":"<speak>Welcome to the Alexa Skills Kit, you can say hello!</speak>"}},"userAgent":"ask-node/2.11.0 Node/v17.3.0","sessionAttributes":{}}
});
To send a request to your handler with alexam, you need to:
- Configure alexam with your handler
- Build a mock request with SkillRequestFactory
- Call send() with the mock request
First, build a new alexam object with AlexamBuilder. As a minimum configuration, you can build it by just calling setHandler() and then build(). You can add more settings if you want; please see the Reference section.
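As a minimal sketch of this step, assuming your skill's lambda handler is exported from ../src as in the test example below:

import { AlexamBuilder, LambdaHandler } from "alexam";
import { handler } from "../src"; // your skill's lambda handler

// Minimum configuration: wrap the lambda handler and build the alexam object
const alexam = new AlexamBuilder()
  .setHandler(new LambdaHandler(handler))
  .build();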
You can build mock requests by using SkillRequestFactory; please see the Reference section for how to build them.
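Continuing the sketch above, the factory attached to the alexam object can build the requests used in this README; the other builder methods are described in the Reference section:

// Reuse the alexam object built in the previous step
const requestFactory = alexam.requestFactory;

// Mock LaunchRequest
const launchRequest = requestFactory.launchRequest();

// Mock IntentRequest for an intent named "CountUpIntent"
const countUpIntent = requestFactory.intentRequest("CountUpIntent");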
Then you can send the mock request to your handler with the alexam.send() method. The response object is a ResponseEnvelope, the general response interface in the official SDK.
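Continuing the same sketch, sending the mock request and inspecting the ResponseEnvelope could look like this:

// Send the mock request built above; send() resolves with a ResponseEnvelope
alexam.send(launchRequest).then(responseEnvelope => {
  // ResponseEnvelope is the official SDK's response interface,
  // so response and sessionAttributes can be inspected directly
  console.log(responseEnvelope.response.outputSpeech);
  console.log(responseEnvelope.sessionAttributes);
});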
The following code is a test example using the jest framework. If you want to see more realistic usage of alexam, please see the example test cases.
import {
  Alexam,
  AlexamBuilder,
  LambdaHandler,
  SkillRequestFactory,
  Session,
} from "alexam";
import { handler } from "../src";
import { ui } from "ask-sdk-model";

test("LaunchRequest", async () => {
  expect.assertions(1);
  const handlerObj = new LambdaHandler(handler);
  const alexam: Alexam = new AlexamBuilder().setHandler(handlerObj).build();
  const launchRequest = alexam.requestFactory.launchRequest();
  await alexam.send(launchRequest).then(res => {
    expect((res.response.outputSpeech as ui.SsmlOutputSpeech).ssml).toMatch(
      "Welcome to the Alexa Skills Kit, you can say hello!",
    );
  });
});
You can run multiple tests with one alexam object, but alexam retains session information until the session ends, so you should call the resetSession() method in each test case.
describe("Multiple test cases with one alexam object for demonstrating resetSession", () => {
const alexam = new AlexamBuilder()
.setHandler(new LambdaHandler(handler))
.build();
afterEach(() => {
alexam.resetSession();
});
test("Retain session attributes", async () => {
expect.assertions(1);
const requestFactory = alexam.requestFactory;
const countUpIntent = requestFactory.intentRequest("CountUpIntent");
return alexam
.send(countUpIntent)
.then(() => {
const countUpIntent2 = requestFactory.intentRequest("CountUpIntent");
return alexam.send(countUpIntent2);
})
.then(res => {
expect(res.sessionAttributes?.count).toBe(2);
});
});
test("Use pre defined session attributes", async () => {
alexam.skillContext.setSession(
new Session({
applicationId: alexam.skillContext.applicationId,
attributes: { count: 10 },
}),
);
const countUpIntent = alexam.requestFactory.intentRequest("CountUpIntent");
const resp = await alexam.send(countUpIntent);
expect(resp.sessionAttributes?.count).toBe(11);
});
});
See API doc.
virtual-alexa supports remote debugging, but alexam doesn't. I recommend debugging the actual code by using ask-cli or the simulator on the ASK developer console. alexam is for testing or debugging local code while implementing.
virtual-alexa supports the utter command, which lets you test with actual utterances. That has some benefits, but alexam doesn't support it (for now) because:
- Complicated for multilingual skills
If your Alexa skill is multilingual, which language will you choose for the actual test? If you think it's important to check that it works correctly with actual utterances, will you prepare every supported language for each test case? That sounds complicated to me, and it can add noise when designing a test suite. You might think a test case with an actual utterance makes the purpose of the case easier to understand, but I think it is just as clear if the intent name is descriptive.
- Tests fail unexpectedly if sample utterances are modified
If you use actual utterances, you have to rerun the whole test suite whenever you merely modify the sample utterances, because some test cases may include an utterance you removed. I think these kinds of tests are basically for checking logic and shouldn't check detailed interfaces; at least, that is what alexam is designed for.
By the way, I do agree that testing with actual utterances is important, but I recommend using the dialog command of ask-cli for that. It is close to the actual environment, and you can also use built-in intents and slots. I just mean that alexam isn't suitable for it.