How to get the EDMX metadata from a Code First model (and why you need it)

Entity Framework provides a very good experience with its Code First development model, which lets you define plain classes and use them as POCO (Plain Old CLR Object) entities. For example, consider the following model:

    public class Blog
    {
        public int BlogId { get; set; }
        public string Name { get; set; }
        public virtual ICollection<Post> Posts { get; set; }
    }

    public class Post
    {
        public int PostId { get; set; }

        [MaxLength(200)]
        public string Title { get; set; }

        public int LikeCount { get; set; }
        public int BlogId { get; set; }
        public Blog Blog { get; set; }
    }

    public class BlogContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }
        public DbSet<Post> Posts { get; set; }
    }

In terms of usability it's a great way to go, since the code is self-explanatory and can be highly customized via the Fluent API. Now you might be asking yourself: what happened to the EDMX file? Well, you don't really see it, but it still exists.
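As a quick, hypothetical illustration of the kind of Fluent API customization just mentioned (it is not required for the model above, and the table name is made up), an OnModelCreating override in BlogContext might look like this:

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Illustrative only: map Blog to a custom table name and make Name required.
        modelBuilder.Entity<Blog>().ToTable("tbl_Blogs");
        modelBuilder.Entity<Blog>().Property(b => b.Name).IsRequired();
    }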

The way Entity Framework Code First works is by analyzing your assemblies for all the types related to your model and collecting them into a set of "discovered types". Once EF knows about these types, it explores them looking for special attributes such as [Key], [MaxLength] and [Index] to learn more about the model. Finally, it applies a set of conventions that let it infer information that isn't explicitly set. One such convention is key discovery, by which the PostId field is picked up as the key of the Post entity. There are a number of these conventions, and it's important that you understand them before using Code First.

The resulting fully-loaded model description is stored in memory using the EDM format. Effectively, Code First reverse-engineers an EDMX out of the POCOs, attributes and Fluent API calls. From here on, Entity Framework doesn't care whether you created the model Code First or Model First; it behaves the same. It goes on to generate the views, validate the model and set up all the metadata so it's ready to serve its purpose.

As you might imagine, the process of reverse engineering the EDMX out of the Code First model is costly. It's only paid once, at the startup of your first context, but it can still be a noticeable performance hit for your application. There's a way to speed up your startup, though: use an EDMX in your project.

For this, you’ll want to obtain the EDMX data from your Code First context by calling:

EdmxWriter.WriteEdmx(context, new XmlTextWriter(new StreamWriter("MyModel.edmx")));

Call it with your context instance and you'll get an EDMX file with all the data you require. Add the EDMX to your project and modify your connection string accordingly to make use of the EDMX data.
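Here's a minimal sketch of what that can look like end to end, using the BlogContext from above (the file name is just an example; anything that produces an XmlWriter will do):

    using System.Data.Entity.Infrastructure;
    using System.Xml;

    // Write the runtime model of the Code First context to MyModel.edmx.
    using (var context = new BlogContext())
    using (var writer = XmlWriter.Create("MyModel.edmx", new XmlWriterSettings { Indent = true }))
    {
        EdmxWriter.WriteEdmx(context, writer);
    }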

Large models will benefit the most. I’ve noticed reductions in the hundreds of milliseconds just for the EDMX generation part of the code.

Top 10 Microsoft Developer Links for Wednesday, July 30, 2014

  1. Henrik Frystyk Nielsen: Azure Mobile Services .NET Updates
  2. Rockford Lhotka: Third Party Dev Tools Strike Back
  3. Darren Hobbs: When Your Only Hammer is a Keyboard, Everything Looks Like a Tool
  4. CodeProject: Code For Maintainability So The Next Developer Doesn’t Hate You
  5. David Baxter Browne: How to connect to Oracle from a .NET Application
  6. This Week On Channel 9: New Unified Tech Event, CH9 WP8 & 360 App’s, Node.js for Visual Studio and more
  7. Microsoft Azure Cloud Cover Show: Azure Tooling in Visual Studio with Boris Scholl and Catherine Wang
  8. Developer.com: Top 10 Tips for C# Programmers
  9. eWeek: Why DevOps Is Becoming a Pivotal Factor in New Data Centers
  10. Pavel Yosifovich: Introduction to universal Windows apps in Windows 8.1 and Windows Phone 8.1


Imagine Cup 2014 – MSP Summit Day 1

Hello! This is Matsubara from Microsoft Student Partners (MSP)!

I'm attending the MSP Summit, which is being held in Seattle alongside Imagine Cup 2014.

 

On the first day there was a demo session by Steve Clayton, Microsoft's Story Teller.
Both Imagine Cup contestants and MSP Summit attendees took part.

Here's what we learned!

Demo Session: 10 Tips for Giving a Great Demo

 

#1 Tell a story
- Be clear about what you want the audience to take away and which message you want to deliver.

#2 Have a backup plan
 Prepare a backup in case the demo fails.
 Have an alternative demo and spare equipment ready.

#3 Bigger is better
 Whatever you show, show it big!

#4 Rehearse – timing
 Prepare with your time slot in mind.
 Adjust the length of the demo itself to fit the time you have; just changing how fast you talk isn't enough.

#5 Rehearse – words
 Write your words down on paper and practice them,
 so that the right phrase lands in the right place.

#6 Be wary of dead moments
 Demos sometimes fail. Fight it for at most one minute; if it still isn't working, move on to the next thing.

#7 Have a great opening
 The opening is critical!
 Draw people into the presentation and get them interested. To do that, it's important to understand your audience.

#8 Have a great close
 The ending matters just as much!
 Since people can only remember about three things at once, close by summing up the three key points you wanted the demo to convey.

#9 Have fun
 If the presenter is having fun with the demo, the audience has fun too.
 Presenting with energy and humor wins the audience over to your side, and once they're on your side they'll cheer you on even if the demo doesn't go well!

#10 If in doubt…
 If you've prepared everything in #1 through #9 and you're still nervous…

https://twitter.com/stevecla/status/489449883313512448

 Wear orange shoes!!
 Even if the demo doesn't go well, at least people will have something to talk about.

 

We'll keep posting about the MSP Summit, so stay tuned!

How to use CDO 1.2.1 to connect to Exchange 2013 using a dynamic profile

NOTE: This article only applies to Exchange’s MAPI CDO download.  It doesn’t apply to using CDO 1.2.1 with an Outlook 2007 client.

I was discussing an issue recently with a customer and I asked him to connect to the Exchange server using CDO 1.2.1. Then I realized that I had never tried that myself. So I set out to have CDO 1.2.1 create a dynamic profile and connect to Exchange 2013.

First, a few words about dynamic profiles. CDO 1.2.1 can create a profile on the fly when you pass the server name and mailbox name as the last parameter to the Session::Logon() method.

This is different than using a static profile that you configured outside of CDO 1.2.1.

Session::Logon()

http://msdn.microsoft.com/en-us/library/ms526377(v=exchg.10).aspx

One gotcha that I ran into was that the server name and mailbox name need to be delimited by a line feed (character 10). In Visual Basic 6 the line would look like this:

objSession.Logon , , , True, , True, _
    "e9b5d6f1-89f1-4e02-93a1-7b3762cf2c59@contoso.com" & Chr(10) & "admin"

Of course, in Exchange 2013 the server name is the personalized server name of the target mailbox. The mailbox name is just the alias of the user. That's the easy part. The hard part is configuring the registry to make this all work. The RPCHttpProxyMap registry value is needed to get the dynamic profile created. I discuss configuring this value in my omniprof article. The other registry value that needs to be in place is the one which instructs CDO 1.2.1 to proceed even if Public Folders don't exist in the organization. This setting is discussed in a blog post by a former member of my team. Once those are in place it should work.

The reason why these values are needed is that CDO 1.2.1 needs to know how to properly connect to Exchange.  Telling CDO 1.2.1 to “Ignore No PF” instructs it to pass the CONNECT_IGNORE_NO_PF flag when creating the underlying dynamic profile.  Creating the RPCHttpProxyMap registry value tells the underlying MAPI subsystem what RPC Proxy Server to connect to, what authentication to use, and what to do if a non-trusted certificate is encountered.

The two scenarios that I couldn't get working were targeting an Office 365 mailbox and targeting an On-Premises mailbox where the RPC Proxy Server has been configured to accept Basic Authentication. This is because the username and password must be configured on the profile for Exchange's MAPI to use it. You'll need to use a static profile for those scenarios.

Lastly, I wanted to point out that CDO 1.2.1 is not the recommended API for connecting to Exchange Server 2013.  However, I understand that some customers have existing applications that they may need to get working for Exchange 2013 before they upgrade. If you fall into this category this article may help you until you can migrate your application to a better API.

There’s no business like the healthcare business, like no business I know

Irving Berlin eat your heart out. There’s no business like the healthcare business or so it seems from a recently published info-graphic in the Wall Street Journal. Where are the jobs in America? You guessed it, healthcare. But is that a healthy thing for the economy, or a leading indicator of an insidious illness? First of all, let me apologize to every clinician reading this. As a doctor myself, I know there is nothing more distasteful to a physician, nurse, or anyone else who works in healthcare…(read more)

The Data Driven Quality Mindset

“Success is not delivering a feature; success is learning how to solve the customer’s problem.” – Mark Cook, VP of Products at Kodak

I’ve talked recently about the 4th wave of testing called Data Driven Quality (DDQ). I also elucidated what I believe are the technical prerequisites to achieving DDQ. Getting a fast delivery/rollback system and a telemetry system is not sufficient to achieve the data driven lifestyle. It requires a fundamentally different way of thinking. This is what I call the Data Driven Quality Mindset.

Data driven quality turns on its head much of the value system which is effective in the previous waves of software quality. The data driven quality mindset is about matching form to function. It requires the acceptance of a different risk curve. It requires a new set of metrics. It is about listening, not asserting. Data driven quality is based on embracing failure instead of fearing it. And finally, it is about impact, not shipping.

Quality is the matching of form to function. It is about jobs to be done and the suitability of an object to accomplish those jobs. Traditional testing operates from a view that quality is equivalent to correctness. Verifying correctness is a huge job. It is a combinatorial explosion of potential test cases, all of which must be run to be sure of quality. Data driven quality throws out this notion. It says that correctness is not an aspect of quality. The only thing that matters is whether the software accomplishes the task at hand in an efficient manner. This reduces the test matrix considerably. Instead of testing each possible path through the software, it becomes necessary to test only those paths a user will take. Data tells us which paths these are. The test matrix then drops from something like O(2^n) to closer to O(m), where n is the number of branches in the code and m is the number of action sequences a user will take. Data driven testers must give up the futile task of comprehensive testing in favor of focusing on the golden paths a user will take through the software. If a tree falls in the forest and no one is there to hear it, does it make a noise? Does it matter? Likewise with a bug down a path no user will follow.

Success in a data driven quality world demands a different risk curve than the old world. Big up front testing assumes that the cost to fix an issue rises exponentially the further along the process we get. Everyone has seen a chart like the following:

[Chart: the cost to fix an issue rising exponentially the later in the development process it is found]

In the world of boxed software, this is true. Most decisions are made early in the process. Changing these decisions late is expensive. Because testing is cumulative and exhaustive, a bug fix late in the cycle requires re-running a lot of tests, which is also expensive. Fixing an issue after release is even more expensive. The massive regression suites have to be run, and even then there is little self-hosting, so the risks are magnified.

Data driven quality changes the dynamics and thus changes the cost curve. This in turn changes the amount of risk appropriate to take at any given time. When a late fix is very expensive, it is imperative to find the issues early, but finding issues early is expensive. When making a fix is quick and cheap, the value in finding a fix early is not high. It is better to lazy-eval the issues. Wait until they become manifested in the real world before a fix is made. In this way, many latent issues will never need to be fixed. The cost of finding issues late may be lower because broad user testing is much cheaper than paid test engineers. It is also more comprehensive and representative of the real world.

Traditional testers refuse to ship anything without exhaustive testing up front. It is the only way to be reasonably sure the product will not have expensive issues later. Data driven quality encourages shipping with minimum viable quality and then fixing issues as they arise. This means foregoing most of the up front testing. It means giving up the security blanket of a comprehensive test pass.

Big up front testing is metrics-driven. It just uses different metrics than data driven quality. The metrics for success in traditional testing are things like pass rates, bug counts, and code coverage. None of these are important in data driven quality world. Pass rates do not indicate quality. This is potentially a whole post by itself, but for now it suffices to say that pass rates are arbitrary. Not all test cases are of equal importance. Additionally, test cases can be factored at many levels. A large number of failing unimportant cases can cause a pass rate to drop precipitously without lowering product quality. Likewise, a large number of passing unimportant cases can overwhelm a single failing important one.

Perhaps bug counts are a better metric. In fact, they are, but they are not sufficiently better. If quality is the fit of form and function, bugs that do not indicate this fit obscure the view of true quality. Latent issues can come to dominate the counts and render invisible those bugs that truly indicate user happiness. Every failing test case may cause a bug to be filed, whether it is an important indicator of the user experience or not. These in turn take up large amounts of investigation and triage time, not to mention time to fix them. In the end, fixing latent issues does not appreciably improve the experience of the end user. It is merely an onanistic exercise.

Code coverage, likewise, says little about code quality. The testing process in Windows Vista stressed high code coverage and yet the quality experienced by users suffered greatly. Code coverage can be useful to find areas that have not been probed, but coverage of an area says nothing about the quality of the code or the experience. Rather than code coverage, user path coverage is a better metric. What are the paths a user will take through the software? Do they work appropriately?

Metrics in data driven quality must reflect what users do with the software and how well they are able to accomplish those tasks. They can be as simple as a few key performance indicators (KPIs). A search engine might measure only repeat use. A storefront might measure only sales numbers. They could be finer grained. What percentage of users are using this feature? Are they getting to the end? If so, how quickly are they doing so? How many resources (memory, CPU, battery, etc.) are they using in doing so? These kinds of metrics can be optimized for. Improving them appreciably improves the experience of the user and thus their engagement with the software.

There is a term called HiPPO (highest paid person’s opinion) that describes how decisions are too often made on software projects. Someone asserts that users want to have a particular feature. Someone else may disagree. Assertions are bandied about. In the end the tie is usually broken by the highest ranking person present. This applies to bug fixes as well as features. Test finds a bug and argues that it should be fixed. Dev may disagree. Assertions are exchanged. Whether the bug is ultimately fixed or not comes down to the opinion of the relevant manager. Very rarely is the correctness of the decision ever verified. Decisions are made by gut, not data.

In data driven quality, quality decisions must be made with data. Opinions and assertions do not matter. If an issue is in doubt, run an experiment. If adding a feature or fixing a bug improves the KPI, it should be accepted. If it does not, it should be rejected. If the data is not available, sufficient instrumentation should be added and an experiment designed to tease out the data. If the KPIs are correct, there can be no arguing with the results. It is no longer about the HiPPO. Even managers must concede to data.

It is important to note that the data is often counter-intuitive. Many times things that would seem obvious turn out not to work and things that seem irrelevant are important. Always run experiments and always listen to them.

Data driven quality requires taking risks. I covered this in my post on Try.Fail.Learn.Improve. Data driven quality is about being agile. About responding to events as they happen. In theory, reality and theory are the same. In reality, they are different. Because of this, it is important to take an empiricist view. Try things. See what works. Follow the bread crumbs wherever they lead. Data driven quality provides tools for experimentation. Use them. Embrace them.

Management must support this effort. If people are punished for failure, they will become risk averse. If they are risk averse, they will not try new things. Without trying new things, progress will grind to a halt. Embrace failure. Managers should encourage their teams to fail fast and fail early. This means supporting those who fail and rewarding attempts, not success.

Finally, data driven quality requires a change in the very nature of what is rewarded. Traditional software processes reward shipping. This is bad. Shipping something users do not want is of no value. In fact, it is arguably of negative value because it complicates the user experience and it adds to the maintenance burden of the software. Instead of rewarding shipping, managers in a data driven quality model must reward impact. Reward the team (not individuals) for improving the KPIs and other metrics. These are, after all, what people use the software for and thus what the company is paid for.

Team is the important denominator here. Individuals will be taking risks which may or may not pay off. One individual may not be able to conduct sufficient experiments to stumble across success. A team should be able to. Rewards at the individual level will distort behavior and reward luck more than proper behavior.

The data driven quality culture is radically different from the big up front testing culture. As Clayton Christensen points out in his books, the values of the organization can impede adoption of a new system. It is important to explicitly adopt not just new processes, but new values. Changing values is never a fast process. The transition may take a while. Don’t give up. Instead, learn from failure and improve.

If you want to be notified when your app is uninstalled, you can do that from your uninstaller

A customer had a rather strange request. “Is there a way to be notified when the user uninstalls any program from Programs and Features (formerly known as Add and Remove Programs)?”

They didn’t explain what they wanted to do this for, and we immediately got suspicious. It sounds like the customer is trying to do something user-hostile, like seeing that a user uninstalled a program and immediately reinstalling it. (Sort of the reverse of force-uninstalling all your competitors.)

The customer failed to take into account that there are many ways of uninstalling an application that do not involve navigating to the Programs and Features control panel. Therefore, any solution that monitors the activities of Programs and Features may not actually solve the customer’s problem.

The customer liaison went back to the customer to get more information about their problem scenario, and the response was that the customer is developing something like an App Lending Library. The user goes to the Lending Library and installs an application. They want a way to figure out when the user uninstalls the application so that the software can be “checked back in” to the library (available for somebody else to use).

The customer was asking a question far harder than what they needed. They didn’t need to be notified if the user uninstalled any application from the Programs and Features control panel. They merely needed to be notified if the user uninstalled one of their own applications from the Programs and Features control panel.

And that is much easier to solve.

After all, when an application is installed, it registers a command line to execute when the user clicks the Uninstall button. You can set that command line to do anything you want. For example, you can set it to

UninstallString = "C:\Program Files\Contoso Lending Library\CheckIn.exe" ⟨identification⟩

where ⟨identification⟩ is something that the CheckIn program can use to know what program is being uninstalled, so that it can launch the real uninstaller and update the central database.
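As a rough illustration of that registration (the key name, display name, paths, and the ⟨identification⟩ value "App42" below are all hypothetical, not from the original article), the lending library's installer could write such an entry from C# along these lines:

    using Microsoft.Win32;

    // Hypothetical sketch: register an Uninstall entry whose UninstallString points
    // at CheckIn.exe. Programs and Features runs that command when the user clicks
    // Uninstall; CheckIn.exe can then update the library database and launch the
    // application's real uninstaller. (Writing under HKLM requires the elevated
    // context an installer normally runs in.)
    const string keyPath =
        @"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\ContosoLendingLibrary.App42";

    using (RegistryKey key = Registry.LocalMachine.CreateSubKey(keyPath))
    {
        key.SetValue("DisplayName", "Contoso Lending Library - App 42");
        key.SetValue("UninstallString",
            "\"C:\\Program Files\\Contoso Lending Library\\CheckIn.exe\" App42");
    }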

One-Liner: PowerShell Tools for Visual Studio 2013

Unsure if I posted this already, and sorry if I did. http://visualstudiogallery.msdn.microsoft.com/c9eb3ba8-0c59-4944-9a62-6eee37294597 Adam Driscoll posted a Visual Studio extension that lets you edit PowerShell in Visual Studio. It’s not as feature-rich (IMPO) as PowerShell’s native ISE, but … IT’S VISUAL STUDIO!!! Sadly, it doesn’t work with Visual Studio Express. Only the full version can use extensions….(read more)

Chef with PowerShell DSC Now Public!

Many of you have seen the demos done by our friends at Chef, which show how they planned to leverage PowerShell DSC.

Those plans are now public with the publication of the PowerShell DSC Cookbook for Chef, announced in a recent blog post by Adam Edwards.

Check it out here:  http://www.getchef.com/blog/2014/07/24/getting-ready-for-chef-powershell-dsc/

The Chef team has been working hard to get this together, and it’s great to see this going live!

 

- The PowerShell Team

What MGX Felt Like, in Pictures

Last week I went to Microsoft Global Exchange! [Microsoft Confidential information left out, of course.]

  • MGX is an annual conference for Microsoft employees from all around the world.
  • I connected with my peers… (Photo below: I’m the ginger)
  • Learned that being a Microsoft Academy College Hire (MACH) is awesome
  • Learned more about Microsoft’s business strategy
  • Engaged and asked questions
  • Built a hands-on project with a team of other MACHs, practiced teamwork, and wept like Oprah when it was announced that not only were the projects actually going to be donated to people in need, but then those people WALKED IN THE ROOM.
  • Went to parties.
  • And more parties.
  • SO. MANY. PARTIES.