Thursday, September 1, 2016

Microsoft 70-487: Query and manipulate data by using ADO.NET

Exam Objectives

Query and manipulate data by using Connection, DataReader, Command, DataAdapter, DataSet; perform synchronous and asynchronous operations; manage transactions (API)


Quick Overview of Training Materials

Pluralsight:
ADO.NET Fundamentals, ADO.NET by Example

Code Samples:
My ADO.NET Examples on GitHub

Note that the QuestPond video (connection pooling) is basically the exact same video as the section from the ADO.NET Fundamentals Pluralsight course, and the Understanding Disconnected Model video is by the author of the ADO.NET by Example Pluralsight course and covers part of the same material.  I'm thinking you can find all of the material from both of these courses on YouTube if you look...

ADO.NET Architecture


ADO.NET evolved from a series of preceding technologies that all aimed to make the experience of connecting an application to a database easier and more loosely coupled.  Way back in the day, each database had its own native driver with a unique API.  Changing the database system meant substantial code rewrites against different APIs.  Then technologies like DAO, RDO, ODBC, OleDB, and ADO incrementally added layers of abstraction to separate the code from the database.  ADO.NET was the evolution of this technology for the .NET platform, and the other data access technologies in .NET (EF and WCF Data Services) are built on top of it.

At the core of ADO.NET is the managed provider.  The managed provider includes several classes that handle communicating with a database and executing queries and commands.  Included in the .NET Framework are four providers: SqlServer, Oracle, OleDb, and ODBC.  The SqlServer and Oracle providers use native drivers for those respective systems, whereas OleDb and ODBC are more generic and can work with any DBMS that has an OleDb or ODBC driver available.  For example, MS Office products like Access and Excel can use the Jet OleDb driver, which allows ADO.NET to communicate with a local Access database or Excel spreadsheet.

The managed provider includes four implementation specific classes:

  • Connection - encapsulates communication with the data source.  An important concept to keep in mind when working with connections is connection pooling.  With connection pooling, rather than creating a brand new connection every time Open() is called and garbage collecting the connection when it is closed, connections are "cached" and reused.  This saves the work involved in creating and destroying connections.  However, there are some "gotchas" that will undermine this capability, such as using different connection strings (e.g. the username and password vary) or never calling Close().  A small pooling-friendly sketch follows this list.
  • Command - executes a SQL command against an open connection object.  This can be text (in-line) SQL, or a stored procedure. The Command object and the Connection object are used directly in the app in the Connected model of data access.
    • ExecuteNonQuery() - executes SQL and returns number of rows affected.  Commonly used for Insert, Update, and Delete queries.  
    • ExecuteScalar() - executes SQL and returns a single value (first row, first column). Might use for a Count, or an Insert that returns a new Id.
    • ExecuteReader() - executes SQL and returns DataReader to allow navigating the data. Use for Select queries.
  • DataReader - read-only, forward-only cursor used for iterating through the results of a query.
  • DataAdapter - a collection of commands (select, delete, insert, update) that is used to act as an intermediary between the database and the application, through a DataSet.  The "Fill" and "Update" methods populate the DataSet and push changes back to the database, respectively.  DataAdapters and DataSets are used in the Disconnected model of data access.
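
To make the pooling point above a little more concrete, here's a minimal sketch of my own (it assumes the same "Default" connection string used in the demos below).  As long as every connection is built from the exact same connection string and is reliably closed, the provider hands back a pooled connection instead of opening a new physical one each time:

//requires System.Configuration and System.Data.SqlClient
var connectionString = ConfigurationManager.ConnectionStrings["Default"].ConnectionString;
 
for (var i = 0; i < 10; i++)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();   //after the first iteration this is served from the pool
    }                  //Dispose/Close returns the connection to the pool rather than destroying it
}
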
There are two models for interacting with the database through the DataProvider.  In the "Connected" model, the interactions with the database are done directly against an open Connection object using the Command object.  The Command object may return some data directly (the number of affected rows or a scalar), though for Selects the data will be returned through a DataReader.  When the operation is finished, the connection is closed.  In the "Disconnected" model, the connection is not opened and closed explicitly; instead the application uses a DataAdapter to fill a DataSet.  Data is read from this DataSet, and any changes made to it are not reflected in the database until the Update() method is called, which causes the DataAdapter to write the changes in the DataSet back to the database using the Insert, Update, and Delete commands.

A DataSet is an in-memory representation of the result of a database query.  A DataSet can be typed or untyped, the difference being that a "typed" DataSet includes XML schema information.

  • XmlSchema - typed DataSets are defined using an XML schema
  • DataTables - represents a table from the database.  When the query returns multiple result sets (e.g. a batch of several SELECT statements), there will be multiple DataTables in a given DataSet.  The main components of a DataTable are:
    • DataRows
    • DataColumns
    • Constraints
  • DataView - a filtered and/or sorted view of a DataTable (a small example follows this list).
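
The DataView is the one item in that list the later demos don't really exercise, so here's a small sketch of my own (hypothetical filter and sort, using columns that exist on the Employee table) showing how a view filters and sorts a DataTable in memory without touching the underlying data:

//"employees" is assumed to be a DataTable filled elsewhere (e.g. by a DataAdapter)
var view = new DataView(employees);
view.RowFilter = "MaritalStatus = 'S'";   //filter expression evaluated in memory, not sent to SQL Server
view.Sort = "HireDate DESC";
dataGridViewEmployee.DataSource = view;   //grids can bind straight to the view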

The diagram below sums it all up.  Red paths represent interactions in a connected model, while blue represent interactions in a disconnected model.  Because the connection plays a role in both, the path to the database is purple.





Reading Data


In keeping with the tradition of basically every ADO.NET demo I've found online, all of the following examples will use a WinForms app with some simple grid views to display DataSets.  I guess if I wanted to distinguish myself I'd use WPF, but it's way easier if I just follow what I find...

The following code snippets are called in the form constructor, which pulls the connection string from the App.config file and passes it into a connection object:

public Form1()
{
    InitializeComponent();
 
    var connectionString = ConfigurationManager.ConnectionStrings["Default"].ConnectionString;
    var conn = new SqlConnection(connectionString);
 
    //DisconnectedModelLoad(conn);
    ConnectedModelLoad(conn);
}

The form is nothing but a standard WinForm template application with a DataGridView control docked in the window.  Both code snippets result in the following:



Reading data with DataReader


private void ConnectedModelLoad(SqlConnection conn){
    //Connected Model
    var cmd = new SqlCommand();
    cmd.Connection = conn;
    cmd.CommandType = CommandType.Text;
    cmd.CommandText = "SELECT * FROM HumanResources.Employee";
    if (conn.State == ConnectionState.Closed)
        conn.Open();
    var dr = cmd.ExecuteReader();
    var employees = new DataTable();
    employees.Load(dr);
    dataGridViewEmployee.DataSource = employees;
    conn.Close();
}

First, we create a Command object, this one a SqlCommand.  We set the connection, command type, and the inline sql to use in the query.  We check if the connection is closed, and open it if necessary.  We then get a DataReader by calling ExecuteReader().  Since the DataReader can't be bound to the control directly, we create a DataTable and pass the reader to the Load() method.  We can then bind this DataTable to the control.
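
As an aside (this isn't from the course demo), the same read can be wrapped in using blocks so the command and reader are disposed even if something throws partway through; passing CommandBehavior.CloseConnection also closes the connection when the reader is disposed.  A rough variation:

private void ConnectedModelLoad(SqlConnection conn)
{
    using (var cmd = new SqlCommand("SELECT * FROM HumanResources.Employee", conn))
    {
        if (conn.State == ConnectionState.Closed)
            conn.Open();
 
        //CloseConnection ties the connection's lifetime to the reader
        using (var dr = cmd.ExecuteReader(CommandBehavior.CloseConnection))
        {
            var employees = new DataTable();
            employees.Load(dr);
            dataGridViewEmployee.DataSource = employees;
        }
    }
}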


Reading data with DataAdapter


private void DisconnectedModelLoad(SqlConnection conn){
 
    //Disconnected Model
    var da = new SqlDataAdapter("SELECT * FROM HumanResources.Employee", conn);
    var cmb = new SqlCommandBuilder(da);
    DataSet employees = new DataSet();
    da.Fill(employees, "HumanResources.Employee");
    dataGridViewEmployee.DataSource = employees.Tables[0];
}

This code looks much different than the connected model.  Here, we pass the Select query and the connection to a DataAdapter.  We then use a CommandBuilder object to create the Delete, Insert, and Update commands (though we aren't using them here, it was part of the demo I found...).  We then create a DataSet, and use the DataAdapter to "fill" it with the data from the database.  We can then bind the first DataTable in the set to the grid view control.  Notice that nowhere are we explicitly opening and closing the connection.

The above examples use DataGridView binding to hide most of the underlying details about how the data is accessed.  To demonstrate how individual fields are accessed, as well as demonstrate the difference between typed and untyped DataSets, I'll create another simple form, this one with a search box and four fields: first name, last name, marital status, and gender.  In the AdventureWorks database, these are split between two tables: Person and Employee.  This will also look at how to create a relation between these two tables.


Typed Dataset


To create a typed DataSet, we add a DataSet item like we would add a class file (or any other file), and drag tables from Server Explorer onto the designer.  Once we drag Employee and Person onto the canvas it will look like this (notice the relationship is automatically mapped):


Once we have this typed DataSet, working with the data is much more object-oriented.  Rather than using numerical indexes or strings for column names, tables and columns are represented as properties on the DataSet object.  Filling the typed DataSet is done much the same as an untyped DataSet.  I did have to add some TableMappings to mine to ensure the two DataTables were filled correctly:

AdventureWorksDS ds = new AdventureWorksDS();
var connectionString = ConfigurationManager.
    ConnectionStrings["Default"].ConnectionString;
using (var conn = new SqlConnection(connectionString))
{
    var da = new SqlDataAdapter(
        @"SELECT * FROM HumanResources.Employee; 
                      SELECT * FROM Person.Person", conn);
    da.TableMappings.Add("Table", "Employee");
    da.TableMappings.Add("Table1", "Person");
    da.Fill(ds);
}

Constraints and relations are already defined by the typed DataSet, so when we go to access the data we can navigate these relations:

var id = Int32.Parse(textboxId.Text);
var employee = ds.Employee.FindByBusinessEntityID(id);
var person = employee.PersonRow;
 
textboxFirstName.Text = person.FirstName;
textBoxLastName.Text = person.LastName;
textBoxMaritalStatus.Text = employee.MaritalStatus;
textBoxGender.Text = employee.Gender;


Untyped DataSet


An untyped DataSet does not use an XML schema to define the structure of the data.  Instead, the tables and their columns are represented as dictionaries, with numerical indexes and string names for the tables and columns.  While this is arguably more flexible (as it does not require creating the XML schema), it is more prone to error since everything is based on magic strings.  Filling the DataSet is nearly identical, however we will have to create the data relationships and constraints manually:

var da = new SqlDataAdapter(
    @"SELECT * FROM HumanResources.Employee; 
                      SELECT * FROM Person.Person", conn);
da.TableMappings.Add("Table", "Employee");
da.TableMappings.Add("Table1", "Person");
da.Fill(ds2);
 
//Add constraints and relations to untyped dataset
ds2.Tables["Employee"].Constraints.Add("Emp_PK",
    ds2.Tables["Employee"].Columns[0], true);
ds2.Tables["Person"].Constraints.Add("Person_PK",
    ds2.Tables["Person"].Columns[0], true);
string[] columns = { "BusinessEntityId" };
var relation = new DataRelation("FK_Person_Employee",
    ds2.Tables["Person"].Columns[0],
    ds2.Tables["Employee"].Columns[0]);
ds2.Relations.Add(relation);

We can use this relationship object to navigate from an Employee to their respective Person, though it isn't as intuitive as the typed model.  Values in the columns are of type "object", so they will need to be cast to the appropriate type in order to be used (the columns in the typed DataSet are also strongly typed, so this wasn't necessary):

var id = Int32.Parse(textboxId.Text);
var employee = ds2.Tables["Employee"].Rows.Find(id);
var person = employee.GetParentRow(ds2.Relations[0]); 
 
textboxFirstName.Text = person["FirstName"].ToString();
textBoxLastName.Text = person["LastName"].ToString();
textBoxMaritalStatus.Text = employee["MaritalStatus"].ToString();
textBoxGender.Text = employee["Gender"].ToString();

Ultimately, both methods yield the same results:


It may seem like I'm partial to the typed DataSet.  I do like the more elegant, object-oriented code that results from using it, but I also realize that it could potentially create more maintenance because of the extra layer.  Although, if you are manually defining the constraints and relations anyway, it probably makes more sense to just use a typed DataSet.  Like many things, the appropriate choice is going to depend on the situation (I know, sooooo helpful...).


Inserting, Updating, Deleting


As with reading data, changing data is done differently depending on whether your code follows the connected or disconnected model.  In the connected model, a DbCommand object is used to execute queries directly against the database.  In the disconnected model, a DataAdapter is still used to indirectly interact with the database through a DataSet.


Change data with DbCommand


For insert operations, it's useful to get back the Id of the newly created record.  Using ExecuteScalar on the DbCommand object is one way to accomplish this.  Following is a code snippet that is much longer and more complicated than I'd initially hoped for.  How hard could inserting a person and employee be, right?  Well, it turns out that the Person and Employee tables both have a FK relationship with BusinessEntity, and both have a significant number of non-nullable (or otherwise constrained) fields.  So this mess is what I ended up with:

using (var conn = new SqlConnection(connectionString))
{
    var firstName = textboxFirstName.Text;
    var lastName = textBoxLastName.Text;
    var maritalStatus = textBoxMaritalStatus.Text;
    var gender = textBoxGender.Text;
 
 
    var cmd = new SqlCommand();
    cmd.Connection = conn;
    cmd.CommandType = CommandType.Text;
    cmd.CommandText =
        @"  INSERT INTO Person.BusinessEntity (rowguid, ModifiedDate)
            VALUES (NEWID(), GETDATE());
 
            DECLARE @id int;
            SET @id = SCOPE_IDENTITY();
 
            INSERT INTO Person.Person (BusinessEntityId, FirstName, LastName, 
                PersonType, NameStyle, EmailPromotion, rowguid, ModifiedDate)
            VALUES (@id, @fname, @lname, 'EM', 0, 0, NEWID(), GETDATE());
 
            INSERT INTO HumanResources.Employee (BusinessEntityId, MaritalStatus, Gender,
                NationalIdNumber, 
                LoginID, JobTitle, BirthDate, HireDate, SalariedFlag,
                VacationHours, SickLeaveHours, CurrentFlag, rowguid, ModifiedDate)
            VALUES (@id, @mstat, @gender, 
                SUBSTRING(CONVERT(NVARCHAR(50), NEWID()),1, 15), 
                @fname + @lname + '@example.com', '', '1982-01-01', GETDATE(), 0, 
                0, 0, 1, NEWID(), GETDATE());
 
            SELECT @id";
    cmd.Parameters.AddWithValue("@fname", firstName);
    cmd.Parameters.AddWithValue("@lname", lastName);
    cmd.Parameters.AddWithValue("@mstat", maritalStatus);
    cmd.Parameters.AddWithValue("@gender", gender);
    if (conn.State == ConnectionState.Closed)
        conn.Open();
    var id = cmd.ExecuteScalar();
    textboxId.Text = id.ToString();
    conn.Close();
}

Normally you'd be better served making an operation like this into a stored procedure (or several of them...), but for demonstration purposes, this will do well enough.  You'd also do well to have the operations performed within a transaction, to ensure that if one portion of the insert fails, you don't end up with the database in an inconsistent state.
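
Just to sketch what the stored procedure route might look like from the C# side (the procedure name here is made up, and I'm assuming it does the three inserts and selects the new id back, reusing the textbox values gathered above), the command simply switches to CommandType.StoredProcedure and the whole thing can ride on a single transaction:

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("HumanResources.uspInsertEmployee", conn)) //hypothetical proc
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@fname", firstName);
    cmd.Parameters.AddWithValue("@lname", lastName);
    cmd.Parameters.AddWithValue("@mstat", maritalStatus);
    cmd.Parameters.AddWithValue("@gender", gender);
 
    conn.Open();
    using (var trans = conn.BeginTransaction())
    {
        cmd.Transaction = trans;
        var id = cmd.ExecuteScalar();  //proc is assumed to SELECT the new id
        trans.Commit();
        textboxId.Text = id.ToString();
    }
}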

Update is much easier, since it only has to touch the fields that I care about:

using (var conn = new SqlConnection(connectionString))
{
    var id = Int32.Parse(textboxId.Text);
    var firstName = textboxFirstName.Text;
    var lastName = textBoxLastName.Text;
    var maritalStatus = textBoxMaritalStatus.Text;
    var gender = textBoxGender.Text;
 
 
    var cmd = new SqlCommand();
    cmd.Connection = conn;
    cmd.CommandType = CommandType.Text;
    cmd.CommandText =
        @"  UPDATE Person.Person 
            SET FirstName = @fname, 
                 LastName = @lname
            WHERE BusinessEntityId = @id;
 
            UPDATE HumanResources.Employee
            SET MaritalStatus = @mstat, 
                Gender = @gender
            WHERE BusinessEntityId = @id;";
 
    cmd.Parameters.AddWithValue("@id", id);
    cmd.Parameters.AddWithValue("@fname", firstName);
    cmd.Parameters.AddWithValue("@lname", lastName);
    cmd.Parameters.AddWithValue("@mstat", maritalStatus);
    cmd.Parameters.AddWithValue("@gender", gender);
    if (conn.State == ConnectionState.Closed)
        conn.Open();
    cmd.ExecuteNonQuery();
    conn.Close();
}

Still a goodly chunk of code for what would be a couple of lines in EF (but I get ahead of myself...).  Finally, Delete is the simplest of all (in theory), though in this case the following code fails because the AdventureWorks database doesn't allow these rows to be deleted... instead records must be marked as "inactive" by clearing a flag in one of the columns (a sketch of that follows the snippet):

using (var conn = new SqlConnection(connectionString))
{
    var id = Int32.Parse(textboxId.Text);
 
    var cmd = new SqlCommand();
    cmd.Connection = conn;
    cmd.CommandType = CommandType.Text;
    cmd.CommandText =
        @"  -- delete the Employee row first, since it references Person
            DELETE FROM HumanResources.Employee
            WHERE BusinessEntityId = @id;
 
            DELETE FROM Person.Person 
            WHERE BusinessEntityId = @id;";
 
    cmd.Parameters.AddWithValue("@id", id);
    if (conn.State == ConnectionState.Closed)
        conn.Open();
    cmd.ExecuteNonQuery();
    conn.Close();
}
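
For completeness, the "mark inactive" alternative mentioned above is just another ExecuteNonQuery call; something along these lines (my own sketch, clearing the CurrentFlag column that the insert script set earlier):

var cmd = new SqlCommand(
    @"UPDATE HumanResources.Employee
      SET CurrentFlag = 0
      WHERE BusinessEntityId = @id;", conn);
cmd.Parameters.AddWithValue("@id", id);
if (conn.State == ConnectionState.Closed)
    conn.Open();
cmd.ExecuteNonQuery();
conn.Close();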

One word to the wise: Don't mix the connected and disconnected model.  Use one or the other.  I executed the above queries on my demo app that was using a dataset, and to reflect the changes immediately I had to completely reload the DataSet each time the database changed.  It was suuuuper slow and the code was dumb.  Don't do it.  If you are doing inserts, updates, and deletes this way, do your selects with a DataReader.


Change data with DataAdapter


I'm beginning to understand why the disconnected model seems to have fallen out of vogue with developers using ADO.NET.  I spent several hours just trying to get it to insert a new record, which is supposed to be as simple as adding an entry to the "Rows" collection on the DataTable.  Eventually I gave up on the Person/Employee business and created a new DataSet based on the Department table, which has no relationships.  The app is pretty much identical to the one for employee, and the data manipulation bit is much simpler:


With this much simplified scenario, everything worked as advertised.  The Create, Update, and Delete functionality all did what I expected.  One thing I discovered is that when you manually create a new DataRow, you don't get the id back.  I ended up having to refill the DataSet and iterate through all the records looking for the one I just added (a possible workaround is sketched after the code).  The navigation widget in WinForms seems to do this automatically.  Since I'm interested in the ADO.NET piece and not so much the magical things that WinForms can do for us (catch the sarcasm?), I did it all manually as best I could.  The following three methods handle the click events for their obvious counterpart buttons on the UI:

private void Create_Click(object sender, EventArgs e)
{
    using (var conn = new SqlConnection(connectionString))
    {
        initDataAdapter(conn);
 
        var dname = nameTextBox.Text;
        var dgroup = groupNameTextBox.Text;
 
        var newDept = ds.Department.NewDepartmentRow();
        newDept.Name = dname;
        newDept.GroupName = dgroup;
        var mdate = DateTime.Now;
        newDept.ModifiedDate = mdate;
 
        ds.Department.Rows.Add(newDept);
        da.Update(ds);
 
        ds.Tables[0].Rows.Clear();
        da.Fill(ds);
        short newid = -1;
        for (var i = 0; i < ds.Tables[0].Rows.Count; i++)
        {
            var row = (AW_Dept.DepartmentRow)ds.Tables[0].Rows[i];
            if (row.Name == dname &&
               row.GroupName == dgroup &&
               row.ModifiedDate.ToString() == mdate.ToString())
            {
                newid = row.DepartmentID;
            }
        }
        departmentIDTextBox.Text = newid.ToString();
    }
}
 
private void Update_Click(object sender, EventArgs e)
{
    using (var conn = new SqlConnection(connectionString))
    {
        initDataAdapter(conn);
 
        var id = Int32.Parse(departmentIDTextBox.Text);
        var dept = ds.Department.FindByDepartmentID((short)id);
        dept.Name = nameTextBox.Text;
        dept.GroupName = groupNameTextBox.Text;
        dept.ModifiedDate = DateTime.Now;
 
        da.Update(ds);
    }
}
 
private void Delete_Click(object sender, EventArgs e)
{
    using (var conn = new SqlConnection(connectionString))
    {
        initDataAdapter(conn);
 
        var id = Int32.Parse(departmentIDTextBox.Text);
        var dept = ds.Department.FindByDepartmentID((short)id);
        dept.Delete();
 
        da.Update(ds);
 
        departmentIDTextBox.Text = "";
        nameTextBox.Text = "";
        groupNameTextBox.Text = "";
        modifiedDateDateTimePicker.Value = DateTime.Now;
    }
}
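
Regarding the missing id after Create: one workaround (not used above, so treat it as an untested sketch) is to replace the generated InsertCommand with a hand-written one that selects the new row back, and set UpdatedRowSource so ADO.NET copies the identity value into the DataRow as part of Update():

//hand-written insert; the trailing SELECT feeds DepartmentID back into the DataRow
var insert = new SqlCommand(
    @"INSERT INTO HumanResources.Department (Name, GroupName, ModifiedDate)
      VALUES (@Name, @GroupName, @ModifiedDate);
      SELECT DepartmentID, Name, GroupName, ModifiedDate
      FROM HumanResources.Department
      WHERE DepartmentID = SCOPE_IDENTITY();", conn);
insert.Parameters.Add("@Name", SqlDbType.NVarChar, 50, "Name");
insert.Parameters.Add("@GroupName", SqlDbType.NVarChar, 50, "GroupName");
insert.Parameters.Add("@ModifiedDate", SqlDbType.DateTime, 0, "ModifiedDate");
insert.UpdatedRowSource = UpdateRowSource.FirstReturnedRecord;
da.InsertCommand = insert;
 
ds.Department.Rows.Add(newDept);
da.Update(ds);                      //newDept.DepartmentID is populated by the returned row
departmentIDTextBox.Text = newDept.DepartmentID.ToString();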

I didn't do the equivalent code with the untyped DataSet again because it's basically the same thing, just using the dictionary notation instead of the strong property notation.

Regarding the problems I had with the Employee-Person-BusinessEntity DataSet, one of them was due to the way I was initializing the DataAdapter.  While it is possible to include multiple select statements in the SQL string that goes into the constructor, and this is totally fine for filling the DataSet, it screws with SqlCommandBuilder, which creates the Update, Insert, and Delete scripts.  I had everything coded right in the event handler, but the InsertCommand on the DataAdapter wasn't working.  When I reinitialized the DataAdapter with just the table of interest in the select statement and built the commands, it worked as expected (at least so far as actually inserting a record and not trying to use a different table...).  Still haven't worked out what is going on with the relations, so I still get FK constraint violation exceptions... sigh.
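
Since the initDataAdapter helper and the ds/da/cmb fields aren't actually shown anywhere above, here is my best-guess reconstruction of what they look like after the single-table fix just described (the names match the click handlers, but treat it as a sketch):

//form-level fields assumed by the click handlers
private string connectionString =
    ConfigurationManager.ConnectionStrings["Default"].ConnectionString;
private AW_Dept ds = new AW_Dept();   //typed DataSet containing the Department table
private SqlDataAdapter da;
private SqlCommandBuilder cmb;
 
private void initDataAdapter(SqlConnection conn)
{
    //single-table select, so SqlCommandBuilder can generate usable Insert/Update/Delete commands
    da = new SqlDataAdapter("SELECT * FROM HumanResources.Department", conn);
    da.TableMappings.Add("Table", "Department");  //so Fill and Update target the typed table
    cmb = new SqlCommandBuilder(da);
 
    if (ds.Department.Rows.Count == 0)
        da.Fill(ds);
}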



Asynchronous Operations


So far everything I've demonstrated has been synchronous.  The DbCommand methods ExecuteNonQuery, ExecuteScalar, and ExecuteReader all have Async counterparts that return a Task.  I changed the Department example's Update functionality to use these methods (I also used the Load method on the DataTable to make refreshing the data simpler after the update is finished...):

private async Task<int> Update_Click_Async()
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
 
        var id = (short)Int32.Parse(departmentIDTextBox.Text);
        var name = nameTextBox.Text;
        var groupName = groupNameTextBox.Text;
        var modifiedDate = DateTime.Now;
        SqlCommand cmd = new SqlCommand(
               @"UPDATE HumanResources.Department 
                        SET Name=@name, GroupName=@gname, ModifiedDate=@mdate
                        WHERE DepartmentId = @id", conn);
        cmd.Parameters.AddWithValue("@id", id);
        cmd.Parameters.AddWithValue("@name", name);
        cmd.Parameters.AddWithValue("@gname", groupName);
        cmd.Parameters.AddWithValue("@mdate", modifiedDate);
 
        Task<int> x = cmd.ExecuteNonQueryAsync();
 
        Console.Out.Write("Waiting on update");
        for (var i = 0; !x.IsCompleted && i < 1000; i++)
        {
            Thread.Sleep(1);
            Console.Out.Write(".");
        }
        await x;
 
        SqlCommand select = new SqlCommand(
            @"SELECT * FROM HumanResources.Department;", conn);
        var dr = select.ExecuteReaderAsync();
        Console.Out.Write("Waiting on data reader");
        for (var i = 0; !dr.IsCompleted && i < 1000; i++)
        {
            Thread.Sleep(1);
            Console.Out.Write(".");
        }
        ds.Tables[0].Load(dr.Result);
 
        return x.Result;
    }
}

One alternative approach pointed out in the ExamRef is the use of the BeginExecute_ and EndExecute_ methods.  There is almost no difference in syntax and usage, so the choice between this approach and the Execute_Async methods is probably mostly a matter of taste.  This style doesn't use await, but it does allow for using a callback function, which could be useful in some situations:

//Task<int> x = cmd.ExecuteNonQueryAsync();
AsyncCallback callback = ((result) =>
{
    Console.Out.WriteLine("Update Finished...");
});
var x = cmd.BeginExecuteNonQuery(callback, null);
 
Console.Out.Write("Waiting on update");
for (var i = 0; !x.IsCompleted && i < 1000; i++)
{
    Thread.Sleep(1);
    Console.Out.Write(".");
}
//await x; //no "await" with this approach
 
/* DataReader stuff excluded for brevity */
 
return cmd.EndExecuteNonQuery(x);

The console output shows that the execution did continue after calling BeginExecuteNonQuery, the callback was called, and then the DataReader was retrieved much the same way:





Transactions API


A transaction is an atomic unit of work, usually containing multiple operations (little point if there aren't...), that will all succeed or fail together.  Like most things in ADO.NET, there are multiple ways to use transactions.  One way is with the Transaction class.  This is a provider-specific class that implements transactions for the given provider, so a SQL Server data provider would use a SqlTransaction.  Using the Transaction object is pretty straightforward:

  • Instantiate a transaction of the appropriate type, using the Connection object.
  • Attach the transaction to the DbCommands that will be included.
  • Execute the commands.  
  • Commit the transaction.
  • Catch any exceptions, and call Rollback to cancel the database operations.
Here is what that code might look like for an Update operation:


using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    SqlTransaction trans = conn.BeginTransaction("SampleTransaction");
 
    var id = (short)Int32.Parse(departmentIDTextBox.Text);
    var name = nameTextBox.Text;
    var groupName = groupNameTextBox.Text;
    var modifiedDate = DateTime.Now;
    SqlCommand cmd = new SqlCommand(
           @"UPDATE HumanResources.Department 
                         SET Name=@name, GroupName=@gname, ModifiedDate=@mdate
                         WHERE DepartmentId = @id", conn);
    cmd.Parameters.AddWithValue("@id", id);
    cmd.Parameters.AddWithValue("@name", name);
    cmd.Parameters.AddWithValue("@gname", groupName);
    cmd.Parameters.AddWithValue("@mdate", modifiedDate);
    cmd.Transaction = trans;
 
    try
    {
        var x = cmd.ExecuteNonQuery();
 
        trans.Commit();
 
        SqlCommand select = new SqlCommand(
            @"SELECT * FROM HumanResources.Department;", conn);
        //select.Transaction = trans;
        var dr = select.ExecuteReader();
 
        ds.Tables[0].Load(dr);
        return x;
    }
    catch (Exception ex)
    {
        Console.WriteLine("Commit Exception Type: {0}", ex.GetType());
        Console.WriteLine("  Message: {0}", ex.Message);
 
        // Attempt to roll back the transaction.
        try
        {
            trans.Rollback();
        }
        catch (Exception ex2)
        {
            // This catch block will handle any errors that may have occurred
            // on the server that would cause the rollback to fail, such as
            // a closed connection.
            Console.WriteLine("Rollback Exception Type: {0}", ex2.GetType());
            Console.WriteLine("  Message: {0}", ex2.Message);
        }
        return 0;
    }
}

Note that any Command being executed while a Transaction is opened will need to be included in that transaction.  Above, if the trans.Commit() call is moved to below the select command, then the line of code adding the Select to the transaction will need to be uncommented.  Of course, this actually made it behave weird, which is why I moved the call to Commit() up.  This type of transaction can also be used on a DataSet by attaching the transaction object to the appropriate Command on the DataAdapter instance.  
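
When the adapter's commands are assigned by hand (rather than generated by a CommandBuilder), attaching the transaction is just a matter of setting the Transaction property on each command before calling Update().  A quick sketch, assuming the adapter was built against the same open connection and its commands were set manually:

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    var trans = conn.BeginTransaction();
 
    da.SelectCommand.Transaction = trans;
    da.InsertCommand.Transaction = trans;
    da.UpdateCommand.Transaction = trans;
    da.DeleteCommand.Transaction = trans;
 
    da.Update(ds);   //all of the adapter's commands now execute inside the transaction
    trans.Commit();
}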

One "gotcha", though, is if you are using CommandBuilder to generate the Insert, Update, and Delete commands.  With CommandBuilder, these properties are all null on the DataAdapter instance.  You have to attach the transaction to the commands through the CommandBuilder itself, as described in this article.  Here is what it looks like in my "Delete" button handler:

using (var conn = new SqlConnection(connectionString))
{
    initDataAdapter(conn);
    conn.Open();
    cmb.GetDeleteCommand(); //generates the Delete command
 
    var trans = conn.BeginTransaction("TypedTransaction");
 
    var id = Int32.Parse(departmentIDTextBox.Text);
    var dept = ds.Department.FindByDepartmentID((short)id);
    dept.Delete();
 
    cmb.GetDeleteCommand().Transaction = trans;
    da.Update(ds);
    trans.Commit();
 
    departmentIDTextBox.Text = "";
    nameTextBox.Text = "";
    groupNameTextBox.Text = "";
    modifiedDateDateTimePicker.Value = DateTime.Now;
}

Another method for implementing transactions is through the use of TransactionScope.  TransactionScope is part of System.Transactions, so be sure to add the reference to the project.  The TransactionScope is usually instantiated inside a using statement (probably doesn't have to be, but every example I found did it that way...).  The scope does not need to be explicitly added to any database operations.  When all the relevant operations have been performed, the Complete method on the scope is called to commit the changes.  If something fails, the changes are rolled back (like anything that has a chance of throwing exceptions, this should all be done in try-catch blocks).  Here is what my Create method looks like using a TransactionScope:

try
{
    using (TransactionScope scope = new TransactionScope())
    {
        using (var conn = new SqlConnection(connectionString))
        {
            initDataAdapter(conn);
 
            var dname = nameTextBox.Text;
            var dgroup = groupNameTextBox.Text;
 
            var newDept = ds.Department.NewDepartmentRow();
            newDept.Name = dname;
            newDept.GroupName = dgroup;
            var mdate = DateTime.Now;
            newDept.ModifiedDate = mdate;
 
            ds.Department.Rows.Add(newDept);
            da.Update(ds);
 
 
            ds.Tables[0].Rows.Clear();
            da.Fill(ds);
 
            //If an exception is thrown before scope.Complete() is called, 
            //any database changes are rolled back.
            //throw new Exception("ERMAGERD AN EXCERPTION!!!");
 
            scope.Complete();
 
            //the rest does not need the connection or transaction to work...
            short newid = -1;
            for (var i = 0; i < ds.Tables[0].Rows.Count; i++)
            {
                var row = (AW_Dept.DepartmentRow)ds.Tables[0].Rows[i];
                if (row.Name == dname &&
                   row.GroupName == dgroup &&
                   row.ModifiedDate.ToString() == mdate.ToString())
                {
                    newid = row.DepartmentID;
                }
            }
            departmentIDTextBox.Text = newid.ToString();
            
        }
    }
}
catch (Exception ex) 
{
    Console.Out.WriteLine(ex);
}

When the exception is uncommented, no changes will be committed to the database.

That's all I have for this topic.  Overall I thought the exam ref was an ok resource for this topic, though the authors didn't give very broad coverage of some of this stuff, and really handwaved the transaction section.
