Exam Objectives
Accept data in JSON format (in JavaScript, in an AJAX callback); use content negotiation to deliver different data formats to clients; define actions and parameters to handle data binding; use HttpMessageHandler to process client requests and server responses; implement dependency injection, along with the dependency resolver, to create more flexible applications; implement action filters and exception filters to manage controller execution; implement asynchronous and synchronous actions; implement streaming actions; implement SignalR; test Web API web services
Quick Overview of Training Materials
Exam Ref 70-487 - Chapter 4.2
[Book] Designing Evolvable Web APIs with ASP.NET - Chapter 13 (Model binding)
[MSDN] Web API Documentation:
- JSON and XML Serialization in ASP.NET Web API
- Content Negotiation in ASP.NET Web API
- Parameter Binding in ASP.NET Web API
- HTTP Message Handlers in ASP.NET Web API
- Dependency Injection in ASP.NET Web API 2
- Exception Handling in ASP.NET Web API
- Testing and Debugging ASP.NET Web API
[MSDN] ASP.NET Web API 2: Http Message Lifecycle poster
[Blog] Web API 2 using ActionFilterAttribute ...
[Blog] ASP.NET MVC and Web API - Comparison of Async / Sync Actions
[CodeProject] Web API Thoughts 1 of 3 - Data Streaming
[CSharpCorner] Asynchronous Video Live Streaming with ASP.NET Web APIs 2.0
[Blog] Testing routes in ASP.NET Web API
Accept JSON data
The exam ref spends an awful lot of time for this objective explaining how to build a Razor page with JavaScript functionality. I doubt that is really what this objective is about. If you need a refresher, I covered AJAX in my "Implement a callback" post for 70-480, and Razor in the "Compose the UI" post for 70-486.
The key to being able to accept JSON input into your Web API is to create a model. A model is a Plain Old CLR Object (POCO) that represents the data you want to consume. In the Dog API I created for the first Web API section, I have a class representing a dog with a few fields on it:
public class Dog
{
    public string name;
    public Owner owner;
    public List<Toy> toys;
    public List<Dog> friends;
}
I changed the Post() method on the controller to accept a "Dog" object rather than parsing a string from the message body:
// POST: api/Dogs
public HttpResponseMessage Post(Dog dog)
{
    _repo.Add(dog);
    return Request.CreateResponse(HttpStatusCode.Created, _repo.Count - 1);
}
Web API can do model binding for JSON input out of the box; you just have to ensure that the "Content-Type" header on the incoming request is "application/json". Forget to do this and .NET will likely throw an "UnsupportedMediaTypeException".
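To see the Content-Type requirement from the client side, here is a minimal sketch posting JSON with HttpClient. The port is an assumption from my local project, and StringContent sets the header for us:

```csharp
// Sketch: posting JSON to the Dog API. The localhost port is made up;
// StringContent sets the Content-Type: application/json header for us.
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class DogApiClient
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            var json = "{\"name\":\"Gizmo\"}";
            var content = new StringContent(json, Encoding.UTF8, "application/json");

            // Without the application/json content type, the server-side model
            // binder can't pick a formatter and responds 415 Unsupported Media Type.
            var response = await client.PostAsync("http://localhost:52000/api/Dogs", content);
        }
    }
}
```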
Use Content Negotiation
I covered the use of "media formatters" in the post on Designing an API (in the segment "Choose an appropriate format"). This basically covered how content negotiation works, but I'll cover it again here just in case I missed anything.
Content negotiation is the process by which the service chooses which media type, charset, encoding, and language to return to the client. Media types (defined with the "Accept" header) are generally going to be JSON or XML, but as I demonstrated before it is possible to create custom media types using formatters. The character set is basically the string encoding used in the message, such as UTF-8. The encoding specifies which compression algorithm to use on the message (if any). The language can be specified as either just the language (en) or the locale (en-US). The "Accept-Charset", "Accept-Encoding", and "Accept-Language" headers specify these last three, respectively.
I borrowed the code from the documentation on content negotiation and tweaked it to work with my Dog API. What I found interesting is that while the code seems to imply that I should expect an error if I pass an Accept header with an unsupported media type, in reality it seems to just default to the JSON formatter:
[Route("api/Dogs/Gizmo")]
public HttpResponseMessage GetGizmoDog()
{
    var dog = new Dog() { name = "Gizmo" };

    IContentNegotiator negotiator = this.Configuration.Services.GetContentNegotiator();
    ContentNegotiationResult result = negotiator.Negotiate(
        typeof(Dog), this.Request, this.Configuration.Formatters);
    if (result == null)
    {
        var response = new HttpResponseMessage(HttpStatusCode.NotAcceptable);
        throw new HttpResponseException(response);
    }
    return new HttpResponseMessage()
    {
        Content = new ObjectContent<Dog>(
            dog,                        // What we are serializing
            result.Formatter,           // The media formatter
            result.MediaType.MediaType  // The MIME type
        )
    };
}
Even after monkeying around with the global config, and setting the supported media types on the formatters, I still got the same result. ¯\_(ツ)_/¯
One last interesting use case for content negotiation is API versioning. Basically, instead of including the API version in the URL or in some version header, you include it as part of the media type passed in the "Accept" header. I've seen this approach used in both of the referenced Pluralsight courses, as well as in this blog post (which is just one particular, well-written example; they are everywhere...): REST implies Content Negotiation
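As a rough sketch of the idea (the vendor media type name here is my invention), a version-specific formatter might look like:

```csharp
// Sketch: a JSON formatter keyed to a version-carrying vendor media type.
// Clients sending Accept: application/vnd.dogapi.v2+json get this formatter.
using System.Net.Http.Formatting;
using System.Net.Http.Headers;

public class DogApiV2Formatter : JsonMediaTypeFormatter
{
    public DogApiV2Formatter()
    {
        SupportedMediaTypes.Clear();
        SupportedMediaTypes.Add(
            new MediaTypeHeaderValue("application/vnd.dogapi.v2+json"));
    }
}

// Registered ahead of the default JSON formatter in WebApiConfig.Register:
// config.Formatters.Insert(0, new DogApiV2Formatter());
```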
Action and Parameter data binding
Binding is the process of assigning values passed into the API to in-memory variables that can be acted upon. The Parameter Binding in ASP.NET Web API documentation covers the various mechanisms for how this occurs. It includes a number of good code examples, so I'm not going to include any here. The article it references, How to bind to custom objects in action signatures in MVC/WebAPI, points out some of the similarities and differences between Web API and MVC, and is also worth a look.
The simplest case is for primitive types (int, bool, etc.) and a few well-known reference types related to time, strings, GUIDs, etc. These types are pulled from the URI, either as a route parameter or a query parameter. It is possible to force Web API to pull simple types from the body of the message using the [FromBody] annotation. This annotation can be used for at most one parameter.
For complex types, there are a couple of potential approaches. The first is type converters, which treat the complex type as a simple type and parse it from a string representation. The complex type is decorated with a [TypeConverter(typeof(Converter))] annotation, where "Converter" is a class that extends the "TypeConverter" class and overrides the "CanConvertFrom" and "ConvertFrom" methods.
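A sketch along the lines of the GeoPoint example from the documentation; the comma-separated string format is the only contract here:

```csharp
using System;
using System.ComponentModel;
using System.Globalization;

// Sketch: the converter lets Web API bind "47.6,-122.3" from a route or
// query parameter straight to the complex type.
[TypeConverter(typeof(GeoPointConverter))]
public class GeoPoint
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
}

public class GeoPointConverter : TypeConverter
{
    public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        return sourceType == typeof(string) || base.CanConvertFrom(context, sourceType);
    }

    public override object ConvertFrom(ITypeDescriptorContext context,
        CultureInfo culture, object value)
    {
        var s = value as string;
        if (s != null)
        {
            var parts = s.Split(',');
            return new GeoPoint
            {
                Latitude = double.Parse(parts[0], CultureInfo.InvariantCulture),
                Longitude = double.Parse(parts[1], CultureInfo.InvariantCulture)
            };
        }
        return base.ConvertFrom(context, culture, value);
    }
}
```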
Another way of binding complex types is through a model binder. A model binder is created by implementing the IModelBinder interface (the one from System.Web.Http.ModelBinding, not System.Web.Mvc). This interface has one method to implement, BindModel, which takes an action context and a binding context as parameters. These contexts provide access to the request body, query parameters, value providers, and other information needed to create a model object from the request. If binding succeeds, the model is assigned to the bindingContext.Model property, and the method returns a bool indicating success or failure.
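A minimal sketch of the interface, assuming the Dog model from earlier (a real binder would do far more validation and would likely consult multiple value providers):

```csharp
using System.Web.Http.Controllers;
using System.Web.Http.ModelBinding;

// Sketch: build a Dog from a single route/query value. Returning false
// tells the framework this binder couldn't produce a model.
public class DogModelBinder : IModelBinder
{
    public bool BindModel(HttpActionContext actionContext,
        ModelBindingContext bindingContext)
    {
        if (bindingContext.ModelType != typeof(Dog))
        {
            return false;
        }
        var val = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
        if (val == null)
        {
            return false;
        }
        bindingContext.Model = new Dog { name = val.RawValue as string };
        return true;
    }
}

// Usage on an action parameter:
// public HttpResponseMessage Get([ModelBinder(typeof(DogModelBinder))] Dog dog) { ... }
```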
A custom model binder can be set in a number of ways. First, using the [ModelBinder(typeof(xx))] annotation on either the model class the binder works on, or the parameter in the action method that is expecting an instance of that class. Alternatively the model binder class can be set on the HttpConfiguration object (as part of the Services collection).
One of the mechanisms used by model binding is the value provider. A value provider is basically a simple key-value store, which by default will fetch values from the query string and route. The documentation illustrates how to create a custom value provider that gets data from cookies. This value provider is then added to the HttpConfiguration.Services collection, which will allow it to be used on any request. Alternatively, it can be designated with an annotation on an action parameter, which restricts the model binder for that parameter to use only that specific value provider.
A lower-level abstraction used by model binding is HttpParameterBinding. The documentation provides an example of using this approach to bind ETag header values to an ETag model. I think the motivation for using this approach is that you get finer-grained control, such as being able to set the WillReadBody property based on whether the binding will read the message body or not (Web API is really particular about the body being read only once). It also lends itself to being set globally as part of the "ParameterBindingRules" collection on the HttpConfiguration object.
Use HttpMessageHandler
This section on HttpMessageHandler is primarily derived from the HTTP Message Handlers in ASP.NET Web API documentation. One of the first things I did when starting this section was dredge up my previous blog post regarding handlers in MVC. It took a couple attempts to find the bugger, but there it was, all the way back in February of 2015: Design HTTP Modules and Handlers. I mention it because, while I am sometimes inclined to hand wave stuff that I've basically covered elsewhere, HttpMessageHandler is not like the handlers I covered for 70-486. Whereas only one handler ever touches a request in the MVC pipeline, a chain of delegating handlers can manipulate a request in Web API. It isn't really like modules either... while multiple modules can interact with an MVC request, modules are keyed on events, whereas delegating handlers are chained together.
Message handlers are a good fit for cross cutting concerns that need to interact with every request, such as modifying request or response headers, or doing validation. A custom handler is created by extending the DelegatingHandler class and overriding the SendAsync method. The idiom for implementing SendAsync includes the following steps:
- Process the request
- Call base.SendAsync to forward the request up the handler chain
- Await the inner handler response
- Process the response and return it.
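The steps above can be sketched as follows (the handler name is mine):

```csharp
using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Sketch following the idiom above: touch the request, delegate up the
// chain, await the inner response, touch the response on the way out.
public class LoggingHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        Debug.WriteLine("Request: " + request.RequestUri);               // process request
        var response = await base.SendAsync(request, cancellationToken); // forward and await
        Debug.WriteLine("Response: " + (int)response.StatusCode);        // process response
        return response;
    }
}

// Registered globally in WebApiConfig.Register:
// config.MessageHandlers.Add(new LoggingHandler());
```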
Sending the response directly without calling base.SendAsync() effectively short circuits the handler chain, which may be appropriate for a validation handler (requests failing validation require no further processing). Custom handlers are inserted into the pipeline by adding them to the HttpConfiguration.MessageHandlers collection. Order matters here; the first (index 0) handler gets called first, and the request gets passed down the line (assuming everyone follows the above outline for SendAsync), until the last handler in the collection, which will then be the first to see the response message.
Handlers added to the HttpConfiguration object apply globally. It is possible to add a custom handler to a specific route. This is done at the time the route is defined with the MapHttpRoute method, by setting the handler parameter equal to a new instance of the custom handler. When the custom handler is added directly this way, it replaces the entire handler chain, so the default HttpControllerDispatcher handler will not be called. It is possible to recreate a handler chain with the custom handler inserted (which is demoed in the documentation aaaaalll the way at the bottom).
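A sketch of the per-route wiring, assuming a hypothetical custom DelegatingHandler called MyCustomHandler; HttpClientFactory.CreatePipeline rebuilds the chain with the dispatcher at the end, per the documentation:

```csharp
// Sketch: attach MyCustomHandler (a hypothetical DelegatingHandler) to one
// route while keeping the normal controller dispatcher at the end of the chain.
config.Routes.MapHttpRoute(
    name: "DogsSpecialRoute",
    routeTemplate: "api/special/{id}",
    defaults: new { controller = "Dogs", id = RouteParameter.Optional },
    constraints: null,
    handler: HttpClientFactory.CreatePipeline(
        innerHandler: new HttpControllerDispatcher(config),
        handlers: new DelegatingHandler[] { new MyCustomHandler() })
);
```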
Implement Dependency Injection, Dependency Resolver
The Dependency Injection in ASP.NET Web API 2 documentation outlines the built-in facilities for plugging in existing IoC containers such as Castle Windsor, Spring.NET, Autofac, Ninject, and Unity. The tutorial uses Unity (not the game development framework), but any of the others would have worked just as well.
Dependency Injection (DI) is an approach that aims to separate the concerns of object instantiation and object use. Basically, instead of our methods instantiating new instances of collaborator classes, all of our collaborators are passed into us by our callers. The canonical "DI 101" example is the database connection.
In most of the MVC tutorials I've seen, they usually start out by creating a LocalDB or perhaps SQL Server Express database with some sample data, and then wiring up Entity Framework to talk to this datastore. The next step usually involves writing a quick and dirty repository class for the controllers to use, which they create directly using the new keyword. I did a series of posts waaaaay back in 2014 that cover this (I was wet behind the ears, and I'm not saying it was a great implementation, but the concepts are there): ASP.NET MVC Unit Testing Part 1.
The problem with newing up dependencies like this is that they are now hard wired implementation details, and they can't be swapped out without making a code change. This makes testing unnecessarily difficult and annoying, and limits reuse. A better pattern is to pass in these kinds of dependencies.
Now, passing in a SqlRepository dependency doesn't really help you if the methods using that dependency expect exactly that type, which is why interface based programming is important to making DI effective. If our controller expects an IRepository, then we can use our SqlRepository in the production code and a MockRepository for unit testing. There are a couple ways the actual implementation class can be passed in, with constructor injection and setter injection being the most common.
While adding constructor parameters for all your dependencies solves the issue of inflexible implementation expectations, it can still be problematic. For one, if you need to change the constructor signature for any reason, you need to either add an additional constructor, or change the existing one and break all the current consumers of that class. This is one of the problems that Inversion of Control (IoC) containers seek to solve.
Rather than calling the constructor when you need an instance of a class (and keeping track of all the collaborators you need to pass in to get a valid instance), you ask the container to resolve the dependencies for you. You get your instance, and if the constructor changes, the only class that needs to care is the container. This is probably an overly simplistic take on DI and IoC, but I think it gets the basic gist across.
One interesting contrast between Web API and MVC is the way controllers are instantiated. In MVC (at least in the slightly older versions I remember), a controller must have a parameterless constructor (at least with the default controller factory). In Web API, the dependency resolver functionality allows you to give Web API controllers parametered constructors. The parameters are then resolved by the adapter class for the IoC container you've chosen for your project (or by your own implementation of IDependencyResolver, if you really want to roll one...).
The documentation provides a complete tutorial, so I'm not going to paste all the code samples, but I did want to point out one thing that tripped me up for a second. The Web API controller looks like this:
public class ProductsController : ApiController
{
    private IProductRepository _repository;

    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }

    // Other controller methods not shown.
}
There is a single constructor taking an IProductRepository instance. This dependency will need to be passed in when Web API creates a controller instance to serve a request. Web API will try to resolve the dependencies of parametered constructors, and fall back to a default constructor if those dependencies can't be resolved. Configuration is straightforward:
public static void Register(HttpConfiguration config)
{
    var container = new UnityContainer();
    container.RegisterType<IProductRepository, ProductRepository>(
        new HierarchicalLifetimeManager());
    config.DependencyResolver = new UnityResolver(container);

    // Other Web API configuration not shown.
}
"UnityResolver" is a class implementing the IDependencyResolver interface that acts as a wrapper around a Unity container.
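For reference, the wrapper (adapted from the documentation) maps the interface methods onto Unity calls, with BeginScope creating a child container per request:

```csharp
using System;
using System.Collections.Generic;
using System.Web.Http.Dependencies;
using Microsoft.Practices.Unity;

// Sketch of the UnityResolver from the Web API 2 DI tutorial. Returning
// null/empty on resolution failure lets Web API fall back to its defaults.
public class UnityResolver : IDependencyResolver
{
    protected IUnityContainer container;

    public UnityResolver(IUnityContainer container)
    {
        this.container = container;
    }

    public object GetService(Type serviceType)
    {
        try { return container.Resolve(serviceType); }
        catch (ResolutionFailedException) { return null; }
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        try { return container.ResolveAll(serviceType); }
        catch (ResolutionFailedException) { return new List<object>(); }
    }

    public IDependencyScope BeginScope()
    {
        // Each request gets its own scope backed by a child container.
        return new UnityResolver(container.CreateChildContainer());
    }

    public void Dispose()
    {
        container.Dispose();
    }
}
```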
Implement Action and Exception filters
Action filters are covered in the Web API 2 using ActionFilterAttribute blog post, and exception filters are covered in the Exception Handling in ASP.NET Web API documentation.
The first bonus nugget from the Action Filters blog post was that it eventually led me to the HTTP MESSAGE LIFECYCLE poster. Not only does the poster illustrate where Action and Exception filters fit in the pipeline, but also Message Handlers and Model Binding. Definitely worth a look.
ActionFilters are created by extending the ActionFilterAttribute class and overriding the methods OnActionExecuting (which fires before the action method) and OnActionExecuted (which fires just after the action method). Both methods take context objects as parameters; these context objects provide access to the request, response, model state, etc.
Once an action filter is defined, it can be used to decorate the action methods to which it should be applied. It can also be applied to the controller class, which will cause it to be executed for every action method. Filters defined at the class level can be selectively turned off on individual action methods by decorating those methods with the [OverrideActionFilters] attribute.
While action filters in Web API are conceptually similar to those in MVC, they are slightly different in that Web API action filters do not have hooks for an "action result" as seen in the MVC filters. The base class for Web API filters lives in System.Web.Http.Filters.
The following toy example is about the simplest action filter you could create:
public class LoggingActionFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        Debug.WriteLine("Before action...");
    }

    public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
    {
        Debug.WriteLine("After action...");
    }
}
Applied to a controller action:
[LoggingActionFilter]
[Route("api/Dogs/Gizmo")]
public HttpResponseMessage GetGizmoDog()
{
    // implementation omitted
}
This will simply write out a log message any time this method is called (sidebar, notice I'm using the custom media type for our Dog entity...):
Exception filters can be used in a similar way. An exception filter needs to implement the IExceptionFilter interface, and perhaps the simplest way to do this is by subclassing the ExceptionFilterAttribute class and overriding the OnException method. Any uncaught exceptions not of type HttpResponseException are returned to the client as HTTP 500 status responses. Exception filters provide a mechanism for intercepting these uncaught exceptions and translating them into an HttpResponseException with a more appropriate HTTP status and/or message body.
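A minimal sketch, assuming a hypothetical NotFoundException thrown by the repository layer; the filter translates it into a 404 instead of letting it surface as a generic 500:

```csharp
using System.Net;
using System.Net.Http;
using System.Web.Http.Filters;

// Sketch: NotFoundException is a hypothetical application exception type.
// Any other exception falls through and still produces a 500.
public class NotFoundExceptionFilterAttribute : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext context)
    {
        if (context.Exception is NotFoundException)
        {
            context.Response = context.Request.CreateResponse(
                HttpStatusCode.NotFound, context.Exception.Message);
        }
    }
}
```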
Both action and exception filters can be registered in several ways. At the method level with an annotation, at the class level with an annotation (affecting all action methods on the class) or globally by adding to the HttpConfiguration.Filters collection.
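The global option looks something like this (the exception filter name here is a hypothetical placeholder):

```csharp
// Sketch: registering filters globally so they apply to every controller.
public static void Register(HttpConfiguration config)
{
    config.Filters.Add(new LoggingActionFilterAttribute());
    config.Filters.Add(new MyExceptionFilterAttribute()); // hypothetical filter

    // Route and formatter configuration not shown.
}
```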
Implement sync and async actions
I covered this in the post on Designing an API (in the segment on "choosing when to use async"). The blog post I followed, Comparing Sync and Async, is included in the resources for this post.
Implement streaming actions
The material for this section primarily derives from the following articles: Web API Thoughts 1 of 3 - Data Streaming [CodeProject], Asynchronous Video Live Streaming with ASP.NET Web APIs 2.0 [C#Corner], Streaming data using Web API [GuvBlog].
Two mechanisms for streaming content from the server to the client come built in: the StreamContent class and the PushStreamContent class. The CodeProject article uses StreamContent, C#Corner uses PushStreamContent, and GuvBlog covers both. Searching for these specific classes yields a plethora of additional tutorials and examples.
Before serving streaming content, the CodeProject tutorial highlights the IIS configuration that is necessary. If streaming files, make sure that IIS has the appropriate read/write permissions at the OS level. With these steps done, serving the stream is really just a matter of creating some kind of stream representing the content and using the StreamContent class essentially as a wrapper. Here is an example largely borrowed (more like straight up plagiarized) from this StackOverflow question; basically I just made it a tiny bit more general:
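The snippet itself didn't survive the copy here, so this is a sketch reconstructed from the description that follows; the controller name and file path are my assumptions:

```csharp
using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

// Sketch: stream a file back to the client via StreamContent. The
// directory is an assumed location with read permission granted to IIS.
public class DownloadController : ApiController
{
    public HttpResponseMessage Get(string fileName)
    {
        var path = Path.Combine(@"C:\Files", fileName); // assumed location
        var stream = new FileStream(path, FileMode.Open, FileAccess.Read);

        var result = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StreamContent(stream)
        };
        result.Content.Headers.ContentType =
            new MediaTypeHeaderValue("application/octet-stream");
        result.Content.Headers.ContentLength = stream.Length;
        result.Content.Headers.ContentDisposition =
            new ContentDispositionHeaderValue("attachment") { FileName = fileName };
        return result;
    }
}
```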
This is really all there is to streaming content back to the client. Set the result.Content to a StreamContent with the content stream passed into the constructor, set the content type to octet stream, set the content length and content disposition, and boom, that's it. The code in the CodeProject tutorial that actually creates the response is pretty much exactly the same; it's just a stylistic choice not to use the initializers...
Where StreamContent pipes the content down the wire continuously, the PushStreamContent class gives you lower-level control over how data is pushed into the stream. The constructor for PushStreamContent takes a lambda function that accepts three parameters of type Stream, HttpContent, and TransportContext. The stream is the output stream that will be written to by the lambda function. Each time the stream is flushed, the contents are serialized and sent to the client. This long-running mechanism could be used in scenarios where the stream is not a fixed length, such as a stock ticker stream.
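A sketch of the stock-ticker idea; the data source is faked, and each flush pushes the buffered bytes to the client:

```csharp
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using System.Web.Http;

// Sketch: the lambda keeps the connection open and pushes a line each
// second; a real ticker would pull prices from an actual feed.
public class TickerController : ApiController
{
    public HttpResponseMessage Get()
    {
        var content = new PushStreamContent(async (stream, httpContent, context) =>
        {
            using (var writer = new StreamWriter(stream))
            {
                for (int i = 0; i < 10; i++)
                {
                    await writer.WriteLineAsync("tick " + i);
                    await writer.FlushAsync(); // each flush is sent to the client
                    await Task.Delay(1000);
                }
            } // disposing the writer closes the stream and ends the response
        }, new MediaTypeHeaderValue("text/plain"));

        return new HttpResponseMessage { Content = content };
    }
}
```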
Inbound streams (that is, pushing content from the client to the API) is demoed in both the Code Project and the GuvBlog article. GuvBlog just calls Request.Content.ReadAsStreamAsync() to get a stream from the request. This stream is then copied to a file stream and copied to disk. The Code Project expects a request in multipart format. There is a property on the HttpRequest base class (HttpRequestBase.Files) that represents a collection of files. I had to go to the GitHub page with the demo code because he punted in the article regarding the implementation, but basically it is just iterating through these files and writing their content out to disk.
The HttpFileCollectionBase representing the multiple files threw me off at first. The iterator over the collection returns strings (file names), but when you treat he collection as a dictionary with this filename as a key, it returns an HttpPostedFileBase... and honestly I'm not sure how the code works at this point because WriteStream() isn't a method on HttpPostedFileBase [it's an extension method defined in another part of his example code base... ffs]. There is an "InputStream" property, but this stack overflow answer points out that using it loads up the entire stream into memory, which is probably not what you want if the upload is some gigantic file...
Two mechanisms for streaming content from the server to the client come built in: the StreamContent class and the PushStreamContent class. The CodeProject article uses StreamContent, C#Corner uses PushStreamContent, and GuvBlog covers both. Searching for these specific classes yields a plethora of additional tutorials and examples.
Before serving streaming content, the CodeProject tutorial makes the point of highlighting the IIS configuration that is necessary. If streaming files, make sure that IIS has the appropriate read/write permissions at the OS level. These steps done, serving the stream is really just a matter of creating some kind of stream representing the content, and using the StreamContent class essentially as a wrapper. Here is an example largely borrowed (more like straight up plagerized) from this StackOverflow question, basically I just made it a tiny bit more general:
public HttpResponseMessage GetStream()
{
    Stream stream = // get your stream of choice
    var result = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StreamContent(stream)
    };
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    result.Content.Headers.ContentLength = stream.Length;
    result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
    {
        FileName = "filename.ext",
        Size = stream.Length
    };
    return result;
}
This is really all there is to streaming content back to the client. Set the result.Content to a StreamContent with the content stream passed into the constructor, set the content type to octet stream, set content length and content disposition, and boom, that's it. The code in the Code Project tutorial that actually creates the response is pretty much exactly the same, just a stylistic choice to not use the initializers...
Where StreamContent pipes the content down the wire continuously, the PushStreamContent class gives you lower-level control over how data is pushed into the stream. The constructor for PushStreamContent takes a lambda function that accepts three parameters of type Stream, HttpContent, and TransportContext. The stream is the output stream that will be written to by the lambda function. Each time the stream is flushed, the contents are sent to the client. This long-running mechanism could be used in scenarios where the stream is not a fixed length, such as a stock ticker stream.
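As a rough sketch of that idea (this isn't from any of the cited articles; the controller, the GetNextQuote() helper, and the one-second delay are all my own invention), a ticker-style action using the Func-returning-Task overload of PushStreamContent might look something like this:

```csharp
public class TickerController : ApiController
{
    public HttpResponseMessage GetTicker()
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            // Async lambda matches the Func<Stream, HttpContent, TransportContext, Task> overload;
            // when the Task completes, the stream is closed and the response ends.
            Content = new PushStreamContent(async (stream, content, context) =>
            {
                try
                {
                    using (var writer = new StreamWriter(stream))
                    {
                        while (true)
                        {
                            // Hypothetical quote source; each flush pushes data to the client
                            await writer.WriteLineAsync(GetNextQuote());
                            await writer.FlushAsync();
                            await Task.Delay(1000);
                        }
                    }
                }
                catch (IOException)
                {
                    // Client disconnected; let the lambda return so the response is torn down
                }
            }, "text/plain")
        };
        return response;
    }

    private string GetNextQuote() { /* placeholder */ return "MSFT 123.45"; }
}
```

Note the exception handling: with an open-ended stream like this, the write loop only ends when the client hangs up, which surfaces as an IOException on the write.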
Inbound streams (that is, pushing content from the client to the API) is demoed in both the Code Project and the GuvBlog article. GuvBlog just calls Request.Content.ReadAsStreamAsync() to get a stream from the request. This stream is then copied to a file stream and copied to disk. The Code Project expects a request in multipart format. There is a property on the HttpRequest base class (HttpRequestBase.Files) that represents a collection of files. I had to go to the GitHub page with the demo code because he punted in the article regarding the implementation, but basically it is just iterating through these files and writing their content out to disk.
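The GuvBlog approach is simple enough to sketch in a few lines (the action name and target path here are my own placeholders, not GuvBlog's):

```csharp
public class UploadController : ApiController
{
    [HttpPost]
    public async Task<HttpResponseMessage> UploadFile()
    {
        // Read the raw request body as a stream rather than buffering it all up front
        using (Stream requestStream = await Request.Content.ReadAsStreamAsync())
        using (FileStream fileStream = File.Create(@"C:\Uploads\incoming.dat")) // hypothetical path
        {
            await requestStream.CopyToAsync(fileStream);
        }
        return Request.CreateResponse(HttpStatusCode.Created);
    }
}
```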
The HttpFileCollectionBase representing the multiple files threw me off at first. The iterator over the collection returns strings (file names), but when you treat the collection as a dictionary with this filename as a key, it returns an HttpPostedFileBase... and honestly I'm not sure how the code works at this point because WriteStream() isn't a method on HttpPostedFileBase [it's an extension method defined in another part of his example code base... ffs]. There is an "InputStream" property, but this stack overflow answer points out that using it loads the entire stream into memory, which is probably not what you want if the upload is some gigantic file...
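For what it's worth, the stock Web API way to stream a multipart upload straight to disk (sidestepping the InputStream in-memory issue) is MultipartFormDataStreamProvider. A sketch, under the assumption that the App_Data folder exists and is writable:

```csharp
public async Task<HttpResponseMessage> PostFile()
{
    if (!Request.Content.IsMimeMultipartContent())
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);

    string root = HttpContext.Current.Server.MapPath("~/App_Data");
    var provider = new MultipartFormDataStreamProvider(root);

    // Streams each body part to a temp file under 'root' as it arrives,
    // rather than buffering the whole upload in memory
    await Request.Content.ReadAsMultipartAsync(provider);

    foreach (MultipartFileData file in provider.FileData)
    {
        // LocalFileName is the server-side temp name; the client's original
        // file name is in the Content-Disposition header
        Trace.WriteLine(file.Headers.ContentDisposition.FileName + " -> " + file.LocalFileName);
    }
    return Request.CreateResponse(HttpStatusCode.OK);
}
```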
Implement SignalR
I cover this in my 70-486 post on Websockets, and create an interesting demo with BoxBlaster, so I'm not going to repeat all that here. SignalR is a big enough topic that it gets its own section in the documentation: SignalR.
Test Web API services
Lastly, I'm going to look at the Testing and Debugging ASP.NET Web API docs. For the most part, testing in Web API is not too difficult because the controllers are relatively well isolated already. One "gotcha" that the Unit Testing Controllers document points out is that when you test the controllers, if they return an HttpResponseMessage, you'll need to be sure you instantiate a request and configuration object and assign them to the controller instance, otherwise it'll throw exceptions:
[TestMethod]
public void GetReturnsProduct()
{
    // Arrange
    var controller = new ProductsController(repository);
    controller.Request = new HttpRequestMessage();
    controller.Configuration = new HttpConfiguration();

    // Act
    var response = controller.Get(10);

    // Assert
    Product product;
    Assert.IsTrue(response.TryGetContentValue<Product>(out product));
    Assert.AreEqual(10, product.Id);
}
As long as you're passing in your dependencies (the above example is passing in a test repository that implements the same repository interface), then testing should be pretty painless. I covered general testing stuff in the Test a web application post, which leads further down into other rabbit holes if one is so inclined.
One special case of unit testing that I wanted to explore that is specific to Web API is the idea of testing routing. While the infrastructure for routing is part of the framework, and thus arguably not something we should worry about testing, for times when you have to wander out of "convention over configuration" land, it's probably not a bad idea to have some tests in place to ensure you didn't muck it up. Luckily, the Strathweb blog post titled Testing routes in ASP.NET Web API does all the hard work of exploring this question.
You have to do a bit of gymnastics with some helper methods in order to start testing routes, since it is one area that Web API is not really conducive to testing. His RouteTester class is all of about 50 lines of code, and the tests are fairly readable.