Herain Oberoi is a Group Product Manager for SQL Server and spent 30 minutes with me talking about StreamInsight and BI in general.
AW: StreamInsight seems like a specialized in-memory DB?
HO: No, it’s a stream, though it would be fair to call it a specialized use of memory – an in-memory DB is more what we use for PowerPivot, where we still treat data as rows and columns, heavily compressed.
AW: Activity thresholds and latencies are tied to licenses, do you have guidance on how much hardware is needed to support those levels?
HO: No, it really depends on a lot of factors – memory, CPU, complexity of the events.
AW: What happens if you reach a point where you’ve exceeded the recommended levels for a license?
HO: We continue to process the events, but the latency increases.
AW: What happens if the latency gets really bad and you reach a point where you’re using all available memory? Do you drop events, and if so, is there a way to know that?
HO: I’d need to ask Torstein for the exact details, but in general we’ll keep queuing as long as possible. When that threshold is reached we might buffer to disk, but more likely we would drop events and log it.
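The overload behavior described above – queue until a threshold is hit, then drop and log – can be sketched conceptually. This is an illustrative model only, not StreamInsight’s actual implementation; the `BoundedEventQueue` class and its capacity parameter are assumptions for the sake of the example:

```python
import logging
from collections import deque

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("event_queue")

class BoundedEventQueue:
    """Queue events up to a fixed capacity; drop and log on overflow.

    Hypothetical sketch of the drop-and-log overflow policy; real
    engines may instead buffer to disk or apply back-pressure.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.events = deque()
        self.dropped = 0  # count of events lost to overflow

    def enqueue(self, event):
        if len(self.events) >= self.capacity:
            # Threshold reached: drop the incoming event and record the loss.
            self.dropped += 1
            log.warning("Queue full (%d events); dropped %r", self.capacity, event)
            return False
        self.events.append(event)
        return True

    def dequeue(self):
        return self.events.popleft() if self.events else None

q = BoundedEventQueue(capacity=3)
for i in range(5):
    q.enqueue(f"event-{i}")
print(len(q.events), q.dropped)  # → 3 2
```

The key point for monitoring is the `dropped` counter and the log entry: if events can be silently discarded under load, the consumer needs some observable signal that loss occurred.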
AW: Is there a concept equivalent to execution plans as in the main engine? How are queries applied?
HO: Nothing that is quite the same. We apply the expressions – and we might have multiple expressions to evaluate – which at a really high level is similar to what you could see in a query plan, but we don’t expose those for viewing.
AW: How much does the cost of the expression change based on complexity of the LINQ query and the evaluation window?
HO: No good way to answer that, depends on how much data is being evaluated.
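To make the “evaluation window” in the cost question concrete, here is a minimal sketch of a tumbling (fixed-width, non-overlapping) window average over a time-ordered event stream. The sample events and the `tumbling_avg` helper are illustrative assumptions – in StreamInsight this would be expressed as a LINQ query over the stream rather than hand-rolled code:

```python
from datetime import datetime, timedelta

# Hypothetical sensor readings: (timestamp, value), assumed sorted by time.
events = [
    (datetime(2010, 11, 10, 9, 0, 1), 10),
    (datetime(2010, 11, 10, 9, 0, 4), 30),
    (datetime(2010, 11, 10, 9, 0, 7), 20),
    (datetime(2010, 11, 10, 9, 0, 12), 40),
]

def tumbling_avg(events, width):
    """Average event values per fixed, non-overlapping window of `width`."""
    if not events:
        return []
    results = []
    window_start = events[0][0]  # first window opens at the first event
    bucket = []
    for ts, value in events:
        # Close (and emit) any windows that end before this event.
        while ts >= window_start + width:
            if bucket:
                results.append((window_start, sum(bucket) / len(bucket)))
            bucket = []
            window_start += width
        bucket.append(value)
    if bucket:  # emit the final, partially filled window
        results.append((window_start, sum(bucket) / len(bucket)))
    return results

for start, avg in tumbling_avg(events, timedelta(seconds=5)):
    print(start.time(), avg)
```

The sketch also illustrates why the cost question has no single answer: the work per window is driven by how many events land in it and by what the aggregate expression computes, both of which vary with the data and the window width.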
AW: Have you looked at scenarios that don’t match the design goals – say a super low value number of transactions such as a call center that wants to monitor agent status on a per minute basis?
HO: Certainly there are times when we are designing to support a really high transaction rate – say equipment at a major hospital – and that scales down nicely to a smaller hospital. Beyond that, it can be used at a much lower transaction rate if you still need low-latency answers.
AW: Do you have prescriptive guidance for how much hardware is needed based on transactions, complexity of expressions, etc?
HO: Not yet.
AW: Can you tell me a little more about other stuff that your team is involved with?
HO: You saw a lot of it in the keynote. We’re really interested in self service reporting, helping users to engage in collaborative decision making – getting feedback such as ‘do you all agree with the report?’ kind of scenarios, and of course having just released R2, we’re now looking at lessons learned and starting on features for the next release.
AW: Can you tell me about your cloud strategy for BI?
HO: Our goal is that you should be able to provision to a local cloud or a public cloud; you pick the one that makes the most sense for your business.
AW: We have SQL Azure, but I haven’t seen anything BI-related announced yet – do you have anything planned there?
HO: Over time we hope to make all of our products available in the cloud, but we don’t have any announcements today.
End of Interview.