How to control your World with Intune MDM, MAM (APP) and Graph API
How to Perform a VSS Backup Test
Backing up and restoring the data held on Exchange is a very important subject in its own right. Verifying that a restore can be carried out successfully is just as important as taking a healthy backup. Backup products described as Exchange-aware use the Exchange writers to take backups via VSS technology.
The majority of problems encountered during backup are caused by incompatible software or a misconfiguration in that software. To pinpoint the cause, that is, to determine whether a backup failure originates from the VSS writer, the disk subsystem and/or the backup software, you can use the BETEST tool.
BETEST is a helper tool included in the Windows SDK and the Volume Shadow Copy Service SDK 7.2 (it is also present in later versions). The tool is easy to find and install. We do not recommend installing it on your Exchange server or on any production Windows server. Install it on a desktop machine, then copy the BETEST tool from the relevant folder to the environment to be tested and run the test there.
An important point to remember about BETEST is that it is meant for testing only. Never use this program as a substitute for a regular backup product. If you do not have a backup product such as DPM, you can also use Windows Server Backup for Exchange Server backups.
The steps to perform a backup with BETEST are:
Download the Volume Shadow Copy Service SDK 7.2 from the web address below and install it.
Before starting, check the state of the Exchange writers. To do this, run the "VSSadmin list writers" command at a command prompt. If the server you run the command on hosts only active databases, you will see the Microsoft Exchange Writer listed:
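On a healthy server the output resembles the sketch below (abbreviated; exact wording varies by Windows and Exchange version):

```
C:\>vssadmin list writers

Writer name: 'Microsoft Exchange Writer'
   Writer Id: {76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}
   State: [1] Stable
   Last error: No error
```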
If a passive copy is also present, the Microsoft Exchange Replica Writer will be listed as well.
Two pieces of information to watch for on the listed writers are the State and Last error values. These should be Stable and No error, respectively.
If one of these writers has Failed, you can restart the Information Store or the Replication service to bring the State back to Stable. Keep in mind that restarting the Information Store will interrupt users' mailbox connections.
In this step we will create the Components.txt file from which BETEST reads its backup configuration. Start by opening Notepad.
The general format of the configuration in Components.txt is shown below. We will adapt it to the database we are going to back up.
"<WriterId>": "<component-logical-path>" {"target" # "new target", ...}, ..."<component-logical-path>" : '"<subcomponent-logical-path>,...";
The remaining parts hold the logical location of the database to be backed up and its GUID. You can find the GUID of the database you will back up with the following command in the Exchange Management Shell:
Get-MailboxDatabase "<database-name>" | fl Guid
Below is a Components.txt file created for an active database.
"{76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}":"Microsoft Exchange Server\Microsoft Information Store\Mailbox1\a03774fa-434a-49cd-8f99-79d932f5be71";
Above, "{76fe..." is the writer's ID, "Mailbox1" is the server name, and "a0377..." is the GUID of the database. If the database is a passive copy, Replica must also be inserted after Microsoft Information Store:
"{76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}":"Microsoft Exchange Server\Microsoft Information Store\Replica\Mailbox1\a03774fa-434a-49cd-8f99-79d932f5be71";
You may want to run the backup test against more than one database at a time. To do this, add the logical path of each database in quotes, separated by commas:
"{76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}": "Microsoft Exchange Server\Microsoft Information Store\Replica\MailboxSunucu\5df67a32-5f44-4585-ad0e-962b70f399d3","Microsoft Exchange Server\Microsoft Information Store\Replica\MailboxSunucu\35e64d4a-7c6b-41f8-a720-068d2798b908","Microsoft Exchange Server\Microsoft Information Store\Replica\MailboxSunucu\5afe57ab-c14d-4bf9-8a69-78691fad5a33";
After preparing the file contents as described above, save it as Components.txt under the C:\Program Files (x86)\Microsoft\VSSSDK72\TestApps\betest\obj\amd64 folder.
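A typical BETEST invocation looks roughly like the sketch below; the exact switches depend on your SDK version, so treat this as an assumption and check betest /? first:

```
betest /b /s bc.xml /d c:\betest /c components.txt > output.txt
```

Here /b requests a backup, /s names the Backup Components Document file to save, /d is the folder the backup is written to, and /c points at the Components.txt file prepared above.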
In the command above, "c:\betest" is the path where the backups will be created, and output.txt is the log file that shows the backup status and is produced when the backup finishes. You can change the highlighted location as you wish. The image below shows the folders created by BETEST for the database that was backed up.
If an error occurs while the backup is being taken, it can be concluded that the problem originates from the Exchange writer. You can send the resulting Output.txt file to Microsoft Technical Support for analysis.
Burak Petekkaya
AppStack: Microsoft EcoSystem for Enterprise Web Application Hosting
Before we look at the Microsoft application hosting components (AppStack), I want to establish my personal preference. Applications nowadays are a mixture of UI components and associated web services built with ASP.NET/WCF/WF. My aim is to build applications that are Fast, Secure and Resilient, and I'll coin the term "FSR" to describe these three attributes.
To achieve this goal from the application's perspective, I prefer applications to be stateless, and this extends to the servers hosting them as well. There are pros and cons to making an application or server stateful versus stateless, but in the long term I would argue that a stateless configuration is more desirable with scalability in mind.
Application Request Routing (ARR): This IIS extension shows the true power of IIS's modular architecture. It is an advanced reverse-proxy solution, implemented on top of URL Rewrite, that increases a web application's scalability and reliability through rule-based routing, client and host name affinity, load balancing of HTTP server requests, and distributed disk caching.
Application Initialization (AppInit): IIS Application Initialization helps improve the responsiveness of a web site by loading web applications before the first request arrives. By proactively loading and initializing all dependencies, such as database connections, compilation of ASP.NET code, and loading of modules, you can ensure web sites are responsive at all times, even when a site uses a custom request pipeline or when an application pool is recycled, including during overlapped recycling.
AppFabric (Caching): This is about making your application fast, scalable and resilient by having distributed memory based cache cluster. AppFabric caching features can help scale your .NET applications easily and inexpensively by allowing you to combine the memory capacity of multiple computers into a single unified cache cluster. These features include Caching Services, Cache Client, and Cache Administration tools. AppFabric Caching Services are highly scalable, allowing many computers to be configured as nodes of a cache cluster that is available as a single unified memory cache. Caching Services provide a high-availability feature that supports continuous availability of your cached data by storing copies of that data on separate cache hosts. When high availability is enabled on a multi-server cluster, your application can still retrieve its cached data if a cache server fails.
AppFabric (Hosting): This is my silver bullet for effectively managing WCF/WF services on IIS 7. AppFabric Hosting Services include the features provided by the Workflow Management Service, such as lock/retry, auto-start, durable timers, and a command queue.
It provides persistence that works right out of the box. It uses the SQL persistence store that ships with .NET Framework 4 and creates a default persistence database that your applications can leverage, which allows you to scale your stateful services across a set of computers.
AppFabric Monitoring service allows you to perform health monitoring and troubleshooting of running WCF and WF services, and to control those services.
It uses security accounts and SQL Server logins and database roles to determine the access a user or application has to system resources such as persistence databases, timer data, monitoring data, and configuration files. Access to these resources occurs at both application and management levels, which are the two areas of logical scope that relate to the AppFabric security model.
To top it all off, the AppFabric Dashboard gives you visibility into the health of the system, and the unified configuration user interface gives you control over your service configuration. I cry whenever I see WCF/WF services deployed on IIS without AppFabric!
There are more bells and whistles to add to the list above that make IIS an excellent candidate to host your applications, but for now this will do. In the next post I will describe a sample architecture for building a web farm using the Microsoft WebStack (described in an earlier post) and AppStack.
Architecture Diagram: Blueprint for Enterprise Web Infrastructure
In the last two posts we have seen the components of the Microsoft WebStack and AppStack. In this post I will put those together to build a scalable web farm architecture that will act as our blueprint. The following functional diagram represents the logical view of the web farm.
The diagram above represents the architecture of a two-tier, data-driven application, as described below:
We have two ARR servers to distribute incoming requests to the front end servers. These ARR servers use memory-based and disk-based caches for static content. The ARR servers themselves are load balanced by either NLB or a hardware load balancer acting purely at Layer 4 (TCP), because ARR is the application-level load balancer.
Front end servers are stateless servers deployed in the DMZ, responsible for serving ASP.NET pages to clients. These servers don't store any information locally. Session state and any user-specific information is stored on the AppFabric caching cluster with local cache enabled.
We have another set of ARR servers to load balance the backend services.
Backend servers are responsible for the WCF/WF services. These servers run the AppFabric Hosting and Monitoring services and the AppFabric caching client.
We have AppFabric caching cluster for distributed memory cache with fault tolerance enabled to optimize performance and resiliency.
We have a SQL cluster hosting the multiple databases needed by the application/ASP.NET and AppFabric. There are three important databases for AppFabric:
AppFabric Caching Database to store caching cluster information
Persistence database to persist Workflows
Monitoring Database to store AppFabric Monitoring information.
We have two WFF servers running in Active-Passive mode to manage the WebFarm.
At any given time only the active WFF server is available on the network for management; on the passive node the WebFarm service is disabled. The two servers use a shared configuration to synchronize configuration information between them, so that if the primary node fails, the passive node can be brought online by starting the WebFarm service.
WFF takes care of adding and removing servers in the web farm. It is configured with two web farms, where servers are grouped by role into either the FrontEnd or BackEnd farm.
One server in each farm is marked as the primary server and does not receive incoming traffic; it is used to synchronise applications to the other servers in the farm. (Purely for non-technical, administrative reasons the primary server does not handle active client requests.)
Web applications can be provisioned/updated in two ways:
The first and preferred way is self-service. Delegation is enabled on the application, and users are designated as site/application administrators. They connect remotely to the primary server in each farm and upload the new application. This is the preferred method because of its self-service nature and the lack of administration overhead. WFF ensures that changes made to the primary server are replicated to the other servers in the web farm.
Second, developers can publish a WebDeploy package from a publishing server to a central file store. Administrators then create a platform provisioning task, as part of which a WebDeploy action is executed on all servers and the application is provisioned.
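As a sketch of the second method, assuming a hypothetical package share and server name, a WebDeploy package could be pushed to a farm server like this:

```
msdeploy -verb:sync -source:package="\\fileserver\packages\WebApp.zip" -dest:auto,computerName=FRONTEND01
```

WFF would run an equivalent WebDeploy action against every server in the farm as part of the provisioning task.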
Demystifying .NET CLR Performance Counters
Hi Friends!!!
I am back in business; apologies for being out of contact for a while. A lot of exciting things have happened over the past few months. I finally landed where I always wanted to be, so thanks for your support so far, and yes, your guess from the blog address is right :)
I was waiting to write my first post with some exciting information. Exciting meaning hidden from the world, something the internet search engines could not find! In my usual style, here it goes:
Scenario:
I was at a customer site and my task was to monitor IIS system health, which obviously means monitoring application health. I decided to set up performance counters for a start, and I opened the Process counter as shown in the following figure:
The server was hosting multiple web applications and had multiple worker processes. In the Instance list, all the worker processes appeared as w3wp, w3wp#1, w3wp#2, w3wp#n and so on. The problem was: which one is associated with which application pool?
Resolution:
I executed the following command to get a list of worker processes running on the server and associated process ID.
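As a sketch (exact paths depend on your environment), on IIS 6 the worker-process list comes from iisapp.vbs, and the registry tweak described in KB 281884 makes perfmon append the PID to each Process instance name:

```
rem List worker processes with their PIDs and application pools (IIS 6):
cscript %systemroot%\system32\iisapp.vbs

rem IIS 7 equivalent:
%windir%\system32\inetsrv\appcmd list wp

rem Make the Process object show instances as w3wp_<PID> (KB 281884):
reg add HKLM\SYSTEM\CurrentControlSet\Services\PerfProc\Performance /v ProcessNameFormat /t REG_DWORD /d 2
```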
I applied the registry changes as described and voila! I got the output below:
Job done! Hmm... I am sure you are wondering what the point of this blog entry is. Such a waste of time, isn't it? If you are thinking that, then wait...
I opened .Net CLR Counters and specifically .Net CLR Exceptions and .Net CLR Memory and I saw the following:
Hmm! Something didn't work as expected. I verified the registry entries and they appeared correct to me. I came to the conclusion that these registry modifications do not work for the .NET CLR counters. The question was: how do you monitor the .NET CLR counters for a specific application pool only?
I decided it was time to call the internal help line, and within hours I got a reply so amazing that I decided to write this blog.
I was told to look under the ".NET CLR Memory" performance object and was asked, "what is the 4th last counter you see?" I wondered what the 4th last counter had to do with it. I looked at the counter and it was full of surprise: "Promoted Finalization Memory from Gen 1" is the holy grail! The last value of that counter is actually the worker process ID. Now you have the relationship between the .NET CLR performance counters and the worker process ID as well.
Takeaway:
When you have .NET 1.1 installed, it loads the .NET 1.1 performance counters and there is no easy way to establish this relationship. However, from .NET 2.0 onwards you can unload the .NET 1.1 counters and load the .NET 2.0-specific counters with the following instructions:
1) Go to \Windows\Microsoft.NET\Framework\v2.0.50727
2) Run "unlodctr .NetFramework"
3) Run "lodctr corperfmonsymbols.ini"
This should load the .NET 2.0 counters from corperfmonsymbols.ini. So what is the difference? Well, see the screenshot below:
Now you have the real name of the performance counter along with the real value. Secret demystified! Note that although the counter is listed, it will not work with .NET 1.1.
This solution is provided as-is and you use it at your own risk! Please don't take it as an officially supported approach; it is a workaround that happens to work. We should expect a lot of improvement in this area with .NET 4.0, so wait and watch!
PS: Special Thanks to Mr. A. Kamath who assisted me with this problem.
How To Migrate SSL Certificate Using MSDeploy
Hi Friends!
Scenario:
Yesterday I was asked to help out with an IIS 6 to IIS 7 migration of close to 500 sites, 50 of them SSL websites. I thought it was a good time to evaluate MSDeploy. It has support for migrating SSL certificates between servers, which really appealed to me. I opened the built-in help and found the following syntax:
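MSDeploy's cert provider takes the certificate hash as part of the source path; a sketch of the sync command, with placeholder hash and server name, looks roughly like this (check msdeploy -verb:sync -? for your version):

```
msdeploy -verb:sync -source:cert=\lm\my\<certificate-hash> -dest:cert,computerName=<target-server>
```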
As you might have guessed (and this is the reason behind this post), it failed. Have a look at the following screenshot:
It failed the first time because I copied the hash directly from the HTTPCFG output; it failed due to the spaces in the hash. I removed the spaces and tried a second time, and it failed again with a very strange error: "Certificate not found in store". I checked and double-checked that the certificate exists, that it is marked with the private key exportable, and that the IIS website with the SSL certificate works over an SSL connection. I was out of clues, permutations and combinations!
Takeaway:
I forwarded the problem to an internal discussion group and within minutes I got a reply from Andreas Klein, and it was amazing.
I executed the following command:
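On IIS 6 the SSL bindings, including the certificate hash, can be listed with:

```
httpcfg query ssl
```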
Do you notice the random spaces in the hash? They are not actually spaces: they are '0' characters that the human eye cannot see!
E.g. a certificate hash reported by HTTPCFG as "db12 09c20e1 be61a4d86644067604118ee7dfa" should actually be "db12009c20e10be61a4d86644067604118ee7dfa". Instead of 0, HTTPCFG reports ' '.
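As an illustration (any scripting tool works; this is a POSIX shell sketch), restoring the hidden zeros is a one-liner:

```shell
# HTTPCFG prints '0' bytes in the hash as blanks; turn them back into zeros.
hash="db12 09c20e1 be61a4d86644067604118ee7dfa"
fixed=$(printf '%s' "$hash" | tr ' ' '0')
echo "$fixed"   # db12009c20e10be61a4d86644067604118ee7dfa
```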
There is a problem in the way HTTPCFG reports the certificate hash. I hope this saves you some time when doing migrations with MSDeploy.
How To Use MSDeploy to Migrate Global Assembly Cache
Hi,
I am playing with MSDeploy quite a lot these days and it is great. I just want to share how we can use it to install an assembly into the GAC.
I wrote a strongly named assembly for a test and installed it on my Windows 7 laptop. Here is some information about that assembly:
Here is the syntax to synchronize the assembly from my machine to another machine:
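MSDeploy's gacAssembly provider takes the assembly's strong name; a sketch with a hypothetical assembly and server name (verify against msdeploy -verb:sync -? for your version):

```
rem Sync an assembly from the local GAC to a remote machine's GAC:
msdeploy -verb:sync -source:gacAssembly="<AssemblyName>, Version=1.0.0.0, Culture=neutral, PublicKeyToken=<token>" -dest:gacAssembly,computerName=<target-server>

rem Or archive it, copy the archive to the production server, then sync from it:
msdeploy -verb:sync -source:gacAssembly="<AssemblyName>, Version=1.0.0.0, Culture=neutral, PublicKeyToken=<token>" -dest:archivedir=c:\package
msdeploy -verb:sync -source:archivedir=c:\package -dest:auto
```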
Alternatively, if you do not want to synchronize between machines, you can archive the package, move the package files to the production server, and synchronize the production machine with archivedir.
Isn't it great? I think it beats using GACUTIL.EXE or an MSI installer.
IIS 6.0 Log Header
For those of you who support IIS 6, it might be familiar to see the following lines at the start of an IIS 6 log file:
#Software: Microsoft Internet Information Services 6.0
#Version: 1.0
#Date: 2009-01-30 02:05:59
#Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status
It appeared to me (because until now I had never read the exact documentation!) that HTTP.SYS inserts this extra header into the log file when the following actions occur:
1) Restart the Web Site
2) Restart the IIS Server.
3) When new log files get created.
4) Logging Configuration Changes
While troubleshooting IIS problems I generally try to relate a server event to one of the causes above, but I have been caught out multiple times when I could not. In addition to the list above, HTTP.SYS has an idle timeout for log files: to preserve system resources, especially on servers hosting a large number of sites, HTTP.SYS closes the file handle after 15 minutes of inactivity. So if a request arrives every 16th minute, you may see the header above multiple times.
On a side note, IIS 6 logging is not synchronous, so you may need to wait some time before entries get flushed to disk, which is frustrating! Restarting the website or the IIS server does force the log buffer to flush, but more often than not it is simply not possible in a live environment to restart just to flush the log buffer. I also believed that changing the logging configuration forces IIS to write the log to disk, but that is not the case either.
However in IIS7, we have a new option. Try following command by yourself to see the result, if you haven't:
netsh http flush logbuffer
Makecert.exe (Kind Of) SAN and Wildcard certificate
Disclaimer - Makecert is deprecated, and the following will only work for testing in IE, as this is not a true SAN certificate.
I often run into expired-certificate issues while working in test VMs. This article talks about how to generate a self-signed root certificate and then a new certificate signed by that root. But what if you just want a certificate that is self-signed and just works? We can use makecert, but unfortunately it is quite a challenge to figure out where to get the tool, and is it worth the effort?
A simpler solution is the command below, which creates your own wildcard certificate (*.contoso.com); the resulting certificate is also available for you to download.
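A makecert invocation matching this description (wildcard subject plus the EKUs this post describes) would look roughly like the sketch below; the validity dates and store are assumptions:

```
makecert -r -pe -n "CN=*.contoso.com" -b 01/01/2017 -e 01/01/2027 ^
  -eku 1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2,1.3.6.1.5.5.7.3.3,1.3.6.1.5.5.7.3.4 ^
  -ss my -sr localMachine -sky exchange -a sha256
```

Here -r makes it self-signed, -pe marks the private key exportable, and the four EKU OIDs are Server Authentication, Client Authentication, Code Signing and Secure Email.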
The certificate above has its EKU set for Server and Client Authentication, Code Signing and Secure Email, so it should work for most common purposes.
If you want a certificate covering other domain names, i.e. a Subject Alternative Name (SAN), in your VM/test configuration, for example for both *.contoso.com and *.fabrikam.com:
Remember to import this certificate in your "Trusted Root Certification Authorities". Password to import the certificate is: Pa$$w0rd
You can distribute this certificate via Group policy in your domain.
Open the Group Policy object (GPO) that you want to edit.
In the console tree, expand the following path: "Policy Object Name/Computer Configuration/Windows Settings/Security Settings/Public Key Policies/Trusted Root Certification Authorities".
On the Action menu, point to All Tasks, and then click Import.
This starts the Certificate Import Wizard, which guides you through the process of importing a root certificate and installing it as a trusted root certification authority (CA) for this GPO.
Credit to Brett Johnson, who has done a wonderful job of highlighting the great accessibility features throughout our Modern Workplace suite.
As you might know, Microsoft's mission is to empower every person and every organisation on the planet to achieve more. Features that allow people to work more efficiently, such as colour settings, are built into our solutions.
Did you know that around 70% of people have a hidden disability? Neither did I.
EN 301 549 is the European standard that defines accessibility requirements suitable for public procurement of ICT products and services in Europe, and Microsoft takes it seriously.
Something I didn't know either: although not colour blind myself, I have worked with colleagues who are, which meant I needed to consider the colours in reports and actions I submitted. But did you know that Windows has built-in options that help people distinguish colours? The latest Windows 10 update adds filters for colour and contrast settings.
Go to Settings / Ease of Access / Colour and High Contrast.
Toggle Apply Filter on. Choose Deuteranopia, Protanopia or Tritanopia from the dropdown menu.
After the toggle is turned on, you can turn the colour filter on and off using the keyboard shortcut Windows key + Ctrl + C.
I have a friend that is deaf in one ear and can't hear stereo audio…
The mono audio option came to Windows 10 through the creators update and can be found under:
Go to Settings / Ease of Access / Other Options.
Toggle Mono Audio on.
Now they can hear both channels and get all the sound, and not miss anything.
Better writing aid for everyone, including Dyslexia…
As I'm writing this post in Microsoft Word, I have all the settings turned on for spelling, grammar and word suggestions. I'm not a natural writer, and I'm often lazy in reading, typing and all of the above. With these new enhancements, even people without dyslexia get extra help.
Writing Assistance
- Definitions help Language Learners when they have trouble choosing the right word.
- In principle, even native speakers make mistakes and should be careful about the effect of their writing.
- An error description makes the squiggles clear, ensuring users understand the issue.
- Contextual access to explanations helps language learners deal faster with common mistakes.
- Dyslexic users often struggle with many aspects of reading and spelling. We have improved suggestions too.
This feature is a little less known and slightly hidden away, so there are some options you won't have turned on by default:
Open Microsoft Word, Open a new document.
Go to File / Options / Proofing.
Under "Writing Style", hit the Settings... button.
Now select the options that are pertinent to you.
Read aloud…
As I mentioned I am lazy, I have a bad habit of not reading through emails and posts to correct things. But for those with sight difficulties, there are a few different options in Windows and Office, and I personally use this on a regular basis… reading aloud.
Probably the most common is using Narrator to navigate around the user interface and have selected sentences or text read aloud.
The quickest way to turn this on/off is:
Press the Windows logo key + Ctrl + Enter.
To see all Narrator commands, press Caps Lock + F1 after you open Narrator.
If your device has a touchscreen, tap it three times with four fingers.
Reading aloud is great but as part of our garage project you can download a small add-in that allows dictation directly into Word and Emails. Great for those with difficulties typing:
There's also another project being integrated into PowerPoint called Presentation Translator. It has a few neat features, such as the ability to automatically transcribe your speech into subtitles underneath your PowerPoint presentation. Next time you're in a meeting, consider what I learnt earlier: around 70% of people have a hidden disability. Can they hear you properly? Can they understand you? You might want to think about turning this on for your next presentation. Maybe someone in the audience doesn't have English as their first language... This tool also allows the audience to have the transcription translated into a language of their choice (10 for spoken, 60+ for written)... including Welsh!
Brett Johnson @Brettjo - Credit for this blog post and information
Azure Information Protection: Ready, set, protect!
Classifying and labelling documents within our companies is becoming more and more important as we approach the May 2018 deadline to implement procedures that comply with GDPR (General Data Protection Regulation), https://gdpr-legislation.co.uk/. Also see https://www.microsoft.com/en-us/trustcenter/privacy/gdpr in the Microsoft Trust Center for the work we are doing on GDPR.
Dan Plastina, who heads up Microsoft's Azure Information Protection, posted an article introducing how to classify and protect your documents.
Enterprise Mobility Technical Specialist Microsoft UK.
Can the concept of Cloud PBX be economically attractive?
Author: Robert-Jan Gerrits & Graham Hosking
Introduction: Skype for Business Cloud PBX was launched in December 2015 as one of the new O365 Skype for Business services. But what does a Cloud PBX actually do, and why should organisations look at this new PBX concept?
A PBX (private branch exchange) is the term for a corporate telephony system. We've all used the traditional corporate telephony system, which has the same basic principles as your phone system at home, allowing you to make and receive calls to and from the telephony network as well as make and receive internal calls (calls between two handsets).
Fig1. Typical freestanding PBX
Legacy PBX systems can be large pieces of equipment, quite often expensive to maintain due to the specialised skills required, especially when the PBX is end-of-life or out of support. Because of the high initial cost of most PBXs, and the attitude that if it isn't broken there's no need to fix it, PBXs are long-term investments, ranging from 7 to over 10 years of service.
For a PBX to be able to make and receive calls to/from the telephony network, organisations will usually have a contract with a telco or service provider to connect their PBX to the telephony network and provide billing for their calls. As with personal mobile contracts (with a mobile operator), it is usually difficult to change or cancel a contract without penalty.
Fig2. Traditional Call flow from PSTN lines (Outside) to On-Premises
Cloud PBX: The concept of the Cloud PBX is to replace this expensive piece of on-premises equipment with an equivalent telephony service hosted in the cloud. This solution can also work alongside your existing on-premises PBX to bridge the gap between existing investment and future enhancement, giving you the time and pace to move to the Cloud PBX while maintaining the existing telco/SP connection. The Cloud PBX functionality is invoked through signalling between the on-premises phones and the cloud PBX, while the voice path of the call is established from the on-premises phone through an on-premises Cloud Connector gateway to the telephone network.
Fig3. Using existing investments on-premises and leveraging cloud capabilities.
Advantages of Cloud PBX: The advantages of using a Cloud PBX service include:
- Expensive PBX equipment and its associated maintenance can be consumed as a service from the cloud
- Existing telco agreements can be leveraged, avoiding costly breaches of telco contracts and the loss of hard-negotiated call-minute pricing
Cloud PBX with PSTN Calling: At a later stage, when the organisation's telco/SP contract comes up for renewal, or when the organisation wants to remove all on-premises equipment including the on-premises Cloud Connector gateway, the organisation can move its telco/SP contract to the cloud, i.e. O365, as well. This is called Cloud PBX with PSTN Calling. As before, the Cloud PBX functionality is invoked through signalling between the on-premises phones and the cloud PBX, but in this approach the voice path of the call is established from the on-premises phone through the O365 telephony gateway to the telephone network:
Fig4. Cloud PBX in the cloud, connecting to on-premises
Advantages of Cloud PBX with PSTN Calling: The advantages of using a Cloud PBX service with PSTN Calling include:
- The previously mentioned Cloud PBX advantages
- Removal of all on-premises telephony equipment, including the on-premises Cloud Connector gateway
- A single contract for O365 Cloud PBX services as well as telephony call plans (the telco/SP contract)
Conclusion: The concept of Cloud PBX can deliver economic advantages over on-premises telephone systems (PBXs), as outlined above. There are many ways customers can migrate from an on-premises PBX to a cloud PBX, so speak to your Microsoft account team or Microsoft voice partner for more information on this journey. Skype for Business is much more than just a telephone system replacement: expanding the options for your staff to connect to the outside world will ultimately also extend your business's reach. Whether you're a small business with a handful of staff or a large enterprise, make sure you're trialling Skype for Business.
Resources:
See Cloud PBX in action: https://blogs.office.com/2015/11/30/a-deeper-view-into-skype-for-business-cloud-pbx/
Simplify your communications: https://blogs.office.com/2016/01/21/cloud-pbx-with-skype-for-business-simplify-communications-in-the-cloud/
Collaboration highlights from Ignite - what's in it for you?
There's a raft of blogs and articles out there about what was announced at Ignite 2017, but I wanted to share a quick update with some exciting key highlights, and links to videos, PPTs and articles that might be of interest to you. Yes, there's a tonne I've missed.
Although we'd all love to have enough time to watch every session, sometimes we can't #firehose ;)
Here's my highlights specifically for productivity and collaboration:
Advance eDiscovery - import non-office 365 data on premises such as legacy file shares - consistent tool for cases.
Customer Key (BYOK) - meet compliance needs; customers use their own keys to encrypt mailboxes and files in Office 365.
Office 365 Message Encryption - makes it easier for end users to encrypt emails, applying encryption via "do not forward" or other custom templates.
Non-Office 365 users can authenticate and read protected messages using Google or Yahoo identities, in addition to options like a one-time passcode (OTP) or a Microsoft account.
Guest access in Microsoft Teams allows teams in your organization to collaborate with people outside your organization by granting them access to teams and channels.
Roadmap - Multi-geo / connect existing site to a new group / manage group sites via SharePoint / Expiry policy - in app renewal and custom email notifications
At Ignite this week we are going to announce several new areas around modern management. We will also look back at where it all started with System Management Server (SMS): the first code was written 25 years ago. Take a look at Brad's blog about where it all began: https://blogs.technet.microsoft.com/enterprisemobility/2017/09/21/55447/
Connecting people without the internet - PSTN Conferencing
Introduction:
In a world of technology there are things we use on a daily basis that are integrated into our lives and have been around for years: the humble telephone, the internet, faxes and mobile phones. And with all the new announcements around the Skype for Business offerings, it's always good to know that the simple things in life are still there.
Although the world of the internet allows us to connect to people both inside and outside the organisation with a single application, what happens if you don't have an internet connection? Consider being in a premises without broadband: 17% of UK premises are without superfast broadband, so let's include the simple things…
Office 365 has had third-party voice conferencing for a number of years, and multi-party conferencing has now reached its next evolution with features built into the E5 licensing suite.
Connecting to that opportunity without the internet:
Alongside your Skype for Business Enterprise Voice system you need an ally. Enter PSTN conferencing… Consider your sales team trying to arrange a key meeting with a new customer: the attendees might not have good devices or good connectivity, so dial-in can be very useful in these very common scenarios.
Ease of use:
Dial-in/PSTN audio conferencing allows you to schedule a meeting and let people connect not only via a normal web browser or the Skype for Business client but also through a UK-based telephone number and a unique PIN. Excellent for those who don't have a reliable internet connection or are unable to connect, say while driving to their next appointment (using their hands-free kit, of course).
PIC: Joined up collaboration
Here's what it looks like. I've scheduled a normal meeting in Outlook and pressed the Skype Meeting button, invited my internal and external attendees, and they choose how they want to connect.
PIC: One place to connect people:
Join the meeting either via the web link shown below as "Join Skype Meeting" or call the telephone number. *Numbers shown are fictitious.
Once your Office 365 administrator has assigned the correct license to you, within minutes you'll be able to send out your personalised meeting requests with traditional voice dial-in details.
Other Advantages:
Let's consider just some of the other advantages of having PSTN conferencing:
One Place – You only need to schedule the meeting through Outlook. There's nowhere else to go and nothing else to set up.
Pricing – One supplier to manage. There's no need for third-party or long-term contracts.
Granularity – You may find that not all staff members need this, so you choose who is set up for these features.
Time to implement – This can be set up and allocated to staff members within minutes. Automated and easy to use.
Training – Minimal training is required as staff are already used to Outlook. No need to search for the details next time you're in a meeting.
Conclusion:
There are a few options for getting PSTN conferencing, so speak to your Microsoft account manager or licensing partner. Skype for Business is much more than just a telephone system replacement: by expanding the options your staff have to connect to the outside world, it will ultimately also extend your business's reach. Whether you're a small business with a handful of staff or a large enterprise, make sure you're trialing Skype for Business.
Data Stories Brought to Life: Power BI in the Public Sector
Within the public sector, data interfaces between similar and dissimilar systems are very common. For example, having one system that's key to your customers, such as your CRM system, and linking it with financial, revenue and benefits systems.
How do you get a clear view of what's happening within your data environment today, across all of the departments and sections? Do you have the facility for real-time dashboards to see how your organisation is working?
Enter Power BI...
Power BI is a tool from Microsoft that transforms your company's data across the board into rich visuals, letting you collect, organise and filter it and focus on what matters to you.
To explain this even further, let's show you. Below are examples of how data can be presented in a way that you understand. For example, the local government link illustrates very large data sets for crime, road collisions, trees and planning.
Demystifying Project Rigel - One push of a button conferencing
What is Project Rigel? = The idea to simplify a meeting room with just one push of a button.
There's still a bit of confusion around this new initiative and what it means for businesses like yours. Basically, Microsoft has partnered with two vendors, Polycom and Logitech, to provide a meeting solution that will fit into over 97% of the meeting rooms you may have today!
The idea behind this is to use and extend existing equipment that you may already have, such as projectors and flat-screen TVs, and then expand its use into a more fully fledged conference room solution.
Logitech has announced a range of products that will interconnect with an external projector and a Surface Pro to provide the same experience as a Skype Room System today, allowing you to book this resource into your next meeting and join a conference using their centre-of-room 'smart dock' that contains a normal touchscreen Surface Pro tablet.
Polycom on the other hand will have their Group series in-room video conferencing solution qualified to work on Skype for Business and be supported on Office 365. They will have a new, redesigned user interface and will suit different size board/meeting rooms that will also inter-operate with Skype for Business.
To bring the concept back and demystify what Project Rigel is: the facility to leverage a wide range of partner devices that work seamlessly with Skype for Business as a cohesive collaboration and meeting environment.
"Uniting Skype for Business in Office 365 with Polycom's high-quality audio and video solutions gives customers the most complete collaboration toolkit for the modern workplace," said Zig Serafin, Corporate Vice President, Skype for Business.
This new initiative will be available in the 2nd half of 2016.
JustGiving wanted to match up millions of donors with causes they actually care about. But this is just too big for the human brain to calculate – so they turned to the cloud and Microsoft Azure to help manage the process. https://aka.ms/Ih4me7
To ensure they could match potential givers to charities and causes they would really care about, JustGiving needed to do some number crunching of epic proportions. They looked to Microsoft and the cloud to do the job. See how it's benefiting tens of thousands of great causes.
Ever wondered how JustGiving manage to match donors to causes they really care about? Well here's the answer…
Flowchart to the Right Voice Solution
Let's face it, choosing the right voice solution is tough!
There are lots of different types, and now we've thrown in the on-premises vs. off-premises decision, and even totally replacing your existing telephone system with Enterprise Voice. To make life easier, Microsoft has a handy flowchart to walk you through the right voice solution for your organization: an all-in-the-cloud solution delivered by Office 365, or a hybrid solution that combines on-premises software and Office 365 services.
You will also need to consider how to deliver telephony functionality along with access to the Public Switched Telephone Network (PSTN) for all users in your organization. But before we jump into that, let's understand some of the key functions:
Enterprise Voice: Microsoft's software-powered Voice over Internet Protocol (VoIP) solution, included in on-premises deployments of Skype for Business Server. It is a full enterprise-class PBX (telephone system) that uses PSTN connectivity through your local service provider for calls and lines.
Cloud PBX: Microsoft's technology for enabling call control and PBX capabilities in the Office 365 cloud with Skype for Business Online. Skype for Business Cloud PBX allows you to replace your existing telephone system with a set of features delivered directly from Office 365 and tightly integrated into your business's productivity requirements. What do I get with Cloud PBX?
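To make the choice concrete, here's a rough (and entirely unofficial) sketch of the decision flow in code. The two inputs and the option names are simply my paraphrase of the definitions above, not an official Microsoft decision tool:

```python
def recommend_voice_solution(all_in_cloud: bool,
                             keep_onprem_pstn_contract: bool) -> str:
    """Rough sketch of the voice-solution decision described above.

    all_in_cloud: the organisation wants everything delivered from Office 365.
    keep_onprem_pstn_contract: the telco/SP contract (PSTN connectivity)
    stays with an on-premises provider.
    """
    if not all_in_cloud:
        # Full on-premises deployment of Skype for Business Server.
        return "Enterprise Voice (on-premises Skype for Business Server)"
    if keep_onprem_pstn_contract:
        # Hybrid: call control in O365, PSTN via on-premises connectivity.
        return "Cloud PBX with on-premises PSTN connectivity"
    # Everything, including the calling plan, moves to Office 365.
    return "Cloud PBX with PSTN Calling"

print(recommend_voice_solution(all_in_cloud=True, keep_onprem_pstn_contract=False))
```

The real flowchart covers more branches (existing PBX investment, Cloud Connector, etc.), so use this only as a mental model before working through the chart itself.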
Let's get started! Click on the image below to read it in full view:
Voice Calling Plans:
If you're looking to move to Cloud PBX - With PSTN Calling (Everything in the cloud), then you might want to consider what calling plans you need. Take a look here for more up to date information:
Looking at the public data that's available, we can see a shift from VMware AirWatch to Microsoft Enterprise Mobility Suite for new customers. This is because Microsoft has some features that you won't find in other Mobile Device Management solutions...
Skype for Business is a huge business tool both inside Microsoft and for thousands of our customers in the UK. Customers we speak to want to empower their users to leverage different devices, but they're concerned about the security of those devices. Recently I've been speaking with a number of police forces that want to provide Enterprise Voice via Skype for Business to their officers, but see the freedom of allowing anyone to use any device, even personal ones, as a major blocker!
With the recent update to the Skype for Business client we've started to incorporate security and protection as part of the Enterprise Mobility Suite, similar to Outlook, OneDrive, OneNote etc.; features such as Conditional Access and Data Loss Prevention.
This solution is designed so that only applications that are managed and compliant with company policy are able to connect. You now have the option to use a non-managed device but automatically enrol the app (detected from the corporate logon credentials) so that both the app and its data are protected.
Devices that are already enrolled will be presented with a PIN prompt to log on to the Skype for Business app (separate from the PIN used to log on to the device, providing an extra, separate layer). Because Mobile Application Management (MAM) is configured by group or user, the protection will now follow them and not the device.
Encrypt your company data: You'll notice that in the policies there's an option to encrypt the data within the application:
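If you prefer scripting to clicking through the portal, the same kind of policy can be expressed as a Graph API request body. The sketch below only assembles the JSON (it doesn't call the service); the property names follow the Graph `androidManagedAppProtection` resource as I understand it, so verify them against the current Graph API reference before relying on them:

```python
import json

def build_app_protection_policy(display_name: str,
                                require_pin: bool = True,
                                encrypt_data: bool = True) -> str:
    """Assemble a JSON body for an Intune app protection (MAM) policy.

    Property names mirror the Graph 'androidManagedAppProtection'
    resource; treat them as illustrative and check the Graph reference.
    """
    policy = {
        "@odata.type": "#microsoft.graph.androidManagedAppProtection",
        "displayName": display_name,
        "pinRequired": require_pin,      # PIN prompt when the app is opened
        "encryptAppData": encrypt_data,  # encrypt company data inside the app
        "dataBackupBlocked": True,       # keep corporate data out of backups
    }
    return json.dumps(policy, indent=2)

print(build_app_protection_policy("SfB MAM policy"))
```

POSTing a body like this to the `deviceAppManagement/androidManagedAppProtections` endpoint with a suitably permissioned token would create the policy; targeting it at groups or users is a separate assignment step.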
Alright then, we've set up the policies and added our users. What does it look like to them?
When you open an application on the device that is controlled by MAM, you will be automatically prompted that it's a corporately controlled app and asked to set a logon PIN. All this without IT or the user having to secure the device manually.
How to control your World with Intune MDM, MAM (APP) and Graph API
Over the last year the Microsoft Intune development team has been working hard on changing how we think about managing our end users' multitude of devices and the way they work. We have been working to change how companies ensure users get the correct experience across the devices and applications they need, but most importantly get access to the right data in a secure way.
Microsoft has been cross-platform in its management story for many years now: from our Windows devices, to managing the iOS and Mac ecosystem, to maintaining the many versions of Android, including Android for Work.
With the move of our Intune console to the Ibiza Azure portal (https://portal.azure.com), a much larger ability to share information across the whole of Azure has opened up. The Azure portal is built on blades and allows information to be shared and accessed; for example, while working within the Intune part of the console I can directly access Azure AD information.
With the new console we are constantly adding new features, driving Intune's ability to manage across MDM (Mobile Device Management) and MAM (Mobile Application Management); not only that, we enable rights within Azure AD to apply capabilities around Conditional Access.
When the Microsoft developers undertook the porting of the Intune console to the Azure portal, they built it from the ground up on the Graph API. To quote our developer team:
"Manage Intune with Azure portal and Graph API. One console. One set of APIs. Limitless possibilities. Intune offers you more flexibility than ever before to manage and secure your enterprise. See how you can now use Intune on the Azure portal and also integrate Intune with your existing systems through Microsoft Graph API."
The Intune UI is built on those same Graph API controls, allowing incredible flexibility to manage your environment how you want, through the console or via Azure PowerShell. In future articles I will be going into the Graph API in further detail.
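As a small taster of driving Intune through Graph, here's a minimal Python sketch that builds a request to list managed devices. The endpoint is the documented `deviceManagement/managedDevices` resource, but token acquisition from Azure AD is left as a placeholder, so treat this as a template rather than a finished script:

```python
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def managed_devices_request(access_token: str) -> urllib.request.Request:
    """Build (but do not send) a GET request for Intune managed devices.

    access_token must be an Azure AD bearer token granted the
    DeviceManagementManagedDevices.Read.All permission.
    """
    url = f"{GRAPH_BASE}/deviceManagement/managedDevices"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {access_token}",
                 "Accept": "application/json"},
        method="GET",
    )

# Placeholder token: acquire a real one from Azure AD before sending.
req = managed_devices_request("<token acquired from Azure AD>")
print(req.full_url)
```

Sending the request (for example with `urllib.request.urlopen(req)`) returns a JSON collection of device objects, and the same pattern applies to the other Intune resources the console itself is built on.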