As software projects get more complex, managing dependencies becomes an increasingly critical task for developers. Dependencies, the external packages or libraries that your project relies on, can significantly impact your application's security, maintainability and compatibility. This is particularly true in the .NET ecosystem, where projects often rely on a vast array of NuGet packages.
In this article, we'll explore effective strategies for managing these dependencies, with a focus on identifying and mitigating vulnerabilities, leveraging tools such as GitHub Dependabot, and discussing other open source alternatives that can bolster your security posture.
Understanding Dependency Management
Before diving into the tools and practices, it's essential to grasp what dependency management entails. Dependency management involves:
- Identifying dependencies: Knowing what libraries or frameworks your project relies on.
- Vulnerability tracking: Monitoring dependencies for any security vulnerabilities and addressing them promptly.
The Importance of Keeping Dependencies Updated
Regularly updating dependencies is crucial for several reasons. First and foremost, it protects your application from known vulnerabilities, which are often patched in later versions of a dependency. Updates can also bring performance improvements, new features and compatibility with newer technologies, enhancing your project's overall quality and lifespan.
One of the biggest mistakes I see developers make is not checking whether the dependencies in their code have open security vulnerabilities. This is especially true with code that hasn't been updated in several years but is still being used in production. Security vulnerabilities in code like this can lead to intellectual property losses, data breaches and worse.
Tools for Managing Dependencies
If you use GitHub, then GitHub Dependabot is an invaluable tool that automates the monitoring and updating of dependencies -- and it's free.
Integrated directly into GitHub, Dependabot scans your project's dependency files (such as `.csproj` or `packages.config` for .NET projects) and compares the package versions you are using against the GitHub Advisory Database. If it finds you are using a vulnerable package, it will open an alert in the repository, as shown in Figure 1. This alert gives you details of the vulnerability, as well as information on what version of the package to upgrade to in order to resolve the vulnerability.
Dependabot can also automatically generate pull requests to update the code to the non-vulnerable new version, and can even let you know when new versions of the package become available.
As mentioned, Dependabot is free for GitHub users. There is also a paid version available for Azure DevOps users as part of GitHub Advanced Security for Azure DevOps.
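Beyond the automatic alerts, Dependabot's version-update behavior is driven by a `dependabot.yml` file committed to the repository's `.github` directory. A minimal sketch for a .NET project might look like this (schedule and directory values are illustrative; adjust them to your repository layout):

```yaml
# .github/dependabot.yml -- minimal sketch for a .NET project
version: 2
updates:
  - package-ecosystem: "nuget"   # scan NuGet dependencies
    directory: "/"               # where the .csproj / packages.config lives
    schedule:
      interval: "weekly"         # how often to check for new versions
```

With this in place, Dependabot opens pull requests on the configured cadence whenever newer package versions are available.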
Other Open Source Tools
While Dependabot is a powerful tool for managing dependencies, several other open source tools can complement its capabilities:
- NuGet Package Explorer: A Windows application that allows you to view the contents of NuGet packages, explore their dependencies and determine the package's compatibility with different versions of .NET. This tool is essential for manually reviewing dependencies before incorporating them into your project.
- OWASP Dependency-Check: An open source tool that identifies project dependencies and checks if there are any known, publicly disclosed vulnerabilities. Although it requires manual setup and integration into your build process, its comprehensive database of vulnerabilities makes it a valuable tool for .NET developers.
- Snyk: Though not entirely open source, Snyk offers a free tier and integrates well with .NET projects. It scans dependencies for vulnerabilities and provides detailed remediation guidance. Snyk can run within your CI/CD pipeline, ensuring vulnerabilities are caught early in the development cycle.
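As one illustration of the manual setup mentioned above, OWASP Dependency-Check ships a command-line scanner that can be dropped into a build script. The paths below are hypothetical; the flags come from the tool's documented CLI options:

```shell
# Scan a source folder and write an HTML report of known, publicly
# disclosed vulnerabilities found in its dependencies.
dependency-check --project "MyApp" --scan ./src --format HTML --out ./reports
```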
Best Practices for Dependency Management
To effectively manage your .NET project dependencies, consider the following best practices:
- Regularly Review and Update Dependencies: Leverage tools like Dependabot to automate this process, but also allocate time for manual review, especially for major version updates that might introduce breaking changes.
- Adopt a Security-First Mindset: Prioritize security updates and apply them as soon as possible. Use tools like OWASP Dependency-Check and Snyk to identify potential vulnerabilities and address them promptly.
- Understand Your Dependencies: Before adding a new dependency, evaluate its necessity, license compatibility and its own dependency tree. This can prevent introducing unnecessary risks into your project.
- Educate Your Team: Ensure that all team members understand the importance of dependency management and are familiar with the tools and practices you've adopted. This collective awareness can help maintain a secure and stable codebase.
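As a quick supplement to automated tooling, recent versions of the .NET SDK can also report vulnerable NuGet packages directly from the command line, which is handy during a manual review:

```shell
# List NuGet packages with known vulnerabilities, including transitive
# dependencies. Run from the solution or project directory.
dotnet list package --vulnerable --include-transitive
```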
Effective dependency management is a cornerstone of modern software development, particularly in complex ecosystems like .NET. By leveraging tools such as GitHub Dependabot and incorporating other open source solutions into your workflow, you can significantly enhance the security and maintainability of your projects.
Remember, the goal is not just to react to vulnerabilities but to proactively manage your dependencies to prevent issues from arising. With the right tools and practices, you can.
Posted by Mickey Gousset on 03/26/2024
Remember when TypeScript, or C#, or even C++ was new, and you wished you'd known they were going to "be big" so you could be the person ahead of the curve instead of struggling to catch up to where everybody else seemed to be already?
To help stay ahead of that curve, longtime software development expert Ted Neward takes the time to continually scour the coding landscape -- he actually pores through GitHub repos, for example -- to look for new languages that might not be what you end up using in your day job in 2025, but which could expose and refine the concepts that define the language that will.
One thing he's looking for is a language that ups the abstraction game, providing a means, for example, to automatically handle memory management -- an onerous, time-consuming (and often blatantly unsafe) task that had to be done manually in C++ but is handled natively by languages such as Rust.
Neward shares his expertise in major developer conferences, and his next stop is at Visual Studio Live! Chicago, where he'll present "Busy Developer's Guide to Next-Generation Languages."
Attendees are promised to:
- Learn different approaches to coding
- Take away a different way of thinking about building apps
- Get a glimpse into the potential future
We caught up with Neward to get a sneak peek into his session and to learn more about the next-generation languages he's been tracking.
VisualStudioLive!: What inspired you to present a session on next-generation languages?
Neward: For the last decade I've been awaiting the next round of languages that elevate our abstraction level another notch, and I've been somewhat disappointed that they haven't seemed to take root. There's a lot of interesting ideas out there, and I'm pretty sure that if developers (and their management) can get a sense of what we gain by taking this next step up, we'll gain as an industry -- in productivity, in reduction in cognitive complexity, and in security and quality, among other things.
Can you briefly explain what a next-generation language is?
Basically, a language that takes a significant step away from the dominant paradigm (object-oriented or object/functional hybrid) and introduces something "new" into the mix.
A next-generation language "takes a significant step away from the dominant paradigm (object-oriented or object/functional hybrid) and introduces something 'new' into the mix."
Ted Neward, Principal, Neward and Associates
Could you provide one example of a next-generation programming language that you believe is poised to make a significant impact in the future?
Sure: One language to have a look at is Ballerina, a service-oriented programming language that runs on top of the JVM. Because it puts services (that is to say, the same things we talk about when we speak of HTTP APIs, but it goes beyond HTTP in a big, and quite natural, way) front-and-center in the language, we find that writing a new service from scratch turns into one, maybe two files, and a dozen lines of code at most, to get a Docker image fully ready-to-deploy in any cloud service you care to name. Most OO languages have a really hard time keeping up with that, because they're built to solve a different problem.
What are a couple of examples of the different approaches to coding that these next-generation languages encourage? How do they differ from traditional programming paradigms?
The first, already mentioned, is that of service-oriented: What happens when we make services a first-class construct? Or let's think about UI: so much of what we do is write a bunch of objects that have to work together -- what if we took a look at abstracting one level up, and treated the web (HTML/CSS/JS) as an implementation detail rather than something the developers have to stare in the face all the time? What if we elevated user interface to a set of language constructs?
How might these emerging languages influence the way we think about and approach app development? Can you give an example of how they could change our current development practices?
Typically, when we elevate a level of abstraction, we gain a significant reduction in visible complexity -- developers don't have to worry so much about physical details, so we're able to spend fewer lines of code dealing with physical restrictions. Other object-oriented or object-functional language/framework combinations try to accomplish this (React, Angular and so on), but we still get tripped up on all these niggling details.
Consider this: when we wrote code in C++, we had to spend half the code dealing with memory management. When we elevated the abstraction level to memory-managed languages like Java and C#, suddenly a whole lot of worries about physical details (memory management) went away, and we were able to free up brain space for other things.
Drawing from your experience, how do you predict which programming languages or concepts will become more prominent in the future? What indicators do you look for?
Oh, if I were any good at that, I'd have made a lot more money as a fortune teller! A large part of the process is to examine the problems we currently deal with as developers.
For developers who want to stay ahead of the curve, what strategies would you recommend for learning and adapting to these next-generation languages? How can they prepare themselves for the shifts in programming trends?
Frankly, one way I stay ahead of the curve is to do some aggressive browsing -- for example, I'll go up to GitHub, and do a repository search for "programming language" just to see what comes up. Most of the first five pages are recognizable, like Python or Ruby, but once you get past the mainstream open-source languages, you find some really interesting candidates.
How can developers evaluate the long-term viability and industry adoption potential of a new programming language? What factors should they consider when deciding whether to invest time in learning one of these languages?
Does it solve an actual problem? Does it let you not worry about something, or let you build a thing in fewer lines of code than with your traditional language of choice? Does it allow you to do some things at a design level that would be really tricky or expensive to do now?
Take a reasonably small problem (something larger than a TODO list, for example) and try building it using the new language, and see how well or how fast it goes. Don't expect that you'll convince anybody at work to use it right away, but you never know -- if it solves a problem the company is staring down, and the company is committed to using technology as a competitive advantage, you could very well be the person that brought the company the advantage they needed over their competitors!
Note: Those wishing to attend the conference can save hundreds of dollars by registering early, according to the event's pricing page. "Register for VSLive! Chicago by the March 1 Super Early Bird deadline to save up to $400 and secure your seat for intensive developer training in Chicago!" said the organizer of the developer conference.
Posted by David Ramel on 02/27/2024
This quick reference guide shares 15 must-know shortcuts for improving efficiency and productivity in Visual Studio. Master shortcuts for debugging, refactoring, formatting, searching, and more!
Get the PDF!
Sign up below to access the guide “15 (More) Great Visual Studio Keyboard Shortcuts”. Fill out the sign-up form with your name and email address, then use the password provided to access the PDF. Save or print out these useful shortcuts to keep handy while coding for improved speed and efficiency in Visual Studio.
Posted on 02/26/2024
As a small teaser for my upcoming Copilot Engineering Everywhere workshop at VSLive! Las Vegas, I thought I'd give you an introduction to GitHub Copilot.
In the rapidly evolving landscape of software development, AI has emerged as a game-changer, offering tools that augment human capabilities and making coding more efficient and accessible. In this article, we'll explore what GitHub Copilot is, explain how it works and walk you through a simple demo to get you started.
What Is GitHub Copilot?
GitHub Copilot is a cutting-edge AI-powered code completion tool from GitHub, powered by a generative AI model developed by GitHub, OpenAI and Microsoft. It acts as an intelligent assistant for developers, suggesting entire lines of code or even whole functions as you type. GitHub Copilot is designed to understand the context of your code, making it possible to suggest relevant code snippets, implement patterns and even generate code for new projects based on the description in comments.
As you type in your code editor, GitHub Copilot dynamically analyzes the context of your codebase and the comments you write to suggest relevant code snippets. These suggestions can be accepted, modified or rejected, providing a seamless coding experience that can significantly speed up the development process.
Getting Started with GitHub Copilot
To demonstrate the power of GitHub Copilot, let's go through a simple Python project where we'll create a function to calculate the factorial of a number. You will need the following if you want to try this yourself:
- Visual Studio Code (VS Code) installed
- GitHub Copilot extension installed in VS Code
- An active GitHub Copilot subscription
Let's run through the following steps:
1. Open VS Code and create a new Python file named "factorial.py."
2. Let's add a comment at the top of the file describing, in a couple of sentences, what we are trying to accomplish. This helps set the context for GitHub Copilot. We are going to add the following comment block to the top of our file:
```python
# I want to write a function that takes a number and returns the factorial
# of that number. I will use a recursive function to do this. I also want to
# write the appropriate code to call the function and print the result.
```
3. As we press Enter, GitHub Copilot suggests a complete function implementation for calculating the factorial. You can accept the suggestion by pressing Tab or Enter, and the entire code block will be inserted into our file.
4. Press Enter a couple more times, and it suggests the code to call the function to test it, along with the expected results.
5. Run the code and confirm that it returns the correct results.
This is just a basic introduction to using GitHub Copilot. As you can see, it generated all the code for us based on our initial comments at the top of the file. We can add more comments directly in the file to ask GitHub Copilot to assist us with whatever we might need -- for example, creating other functions or code relevant to our project.
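For reference, Copilot's suggestions vary from run to run, but for a comment block like the one above, a typical completion looks something like the following sketch (this is one plausible suggestion, not Copilot's guaranteed output):

```python
def factorial(n):
    """Return n! using recursion; 0! is defined as 1."""
    if n == 0:
        return 1
    return n * factorial(n - 1)

# Call the function and print the result, as the comment block requested.
print(factorial(5))  # prints 120
```

Whatever Copilot proposes, treat it as a draft: run it, test it and adjust it just as you would code from any other source.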
What I have shown above just scratches the surface of the GitHub Copilot ecosystem. There is also GitHub Copilot Chat, which is a chat interface directly in your IDE where you can ask Copilot coding-related questions.
GitHub Copilot in Modern Development
GitHub Copilot represents a significant leap forward in how developers write code. It not only speeds up the coding process but also helps in learning new languages and frameworks by providing contextually relevant suggestions. We are not Python developers and don't know the Python language. But using Copilot, as shown above, we were able to create a Python script to accomplish our task.
GitHub Copilot is more than just a code completion tool; it's a glimpse into the future of software development, where AI partners with humans to enhance creativity and efficiency of coding. Whether you're a seasoned developer or just starting, GitHub Copilot offers a unique opportunity to enhance your coding experience and take your projects to the next level.
And I'll leave you this teaser: If you do decide to attend my workshop, you'll learn more about both GitHub Copilot, as well as how to use Microsoft and open-source tools to build your own Copilot. Happy coding!
Posted by Mickey Gousset on 02/23/2024
You wouldn't really name a variable "x," would you?
But it happens.
And it's not just "x." It's "usrdb," "item," "foo," "test," "blah" and more. These are real variable names that have been encountered in production-grade codebases.
"And that's just scary!" says Adrienne Braganza-Tacke.
The senior developer advocate at Cisco has become a specialist in naming variables, which is actually more complicated and important than it may seem at first. Luckily, she's bringing her expertise to the Visual Studio Live! developer conference in Chicago, where she'll present a session titled "Variables of the Veracious Variety: How to Better Name your Variables" on April 30, 2024.
The very existence of such a presentation brings to mind the famous quote by Phil Karlton: "There are only two hard things in computer science: cache invalidation and naming things."
Attendees of the introductory/intermediate 75-minute session won't be learning anything about cache invalidation, but they will learn about the importance of naming things and how to do it better. They are promised to:
- Understand what makes variable naming difficult
- Recognize what makes a variable clear, concise, and consistent
- Explore variable naming patterns for almost any type of variable
We caught up with Braganza-Tacke to get a preview of her session, and to learn more about the importance of naming variables in software development.
VisualStudioLive!: What inspired you to present a session on naming variables?
Braganza-Tacke: I tend to talk about topics that I think every software developer should be knowledgeable about and variable naming is one of them. What really inspired me to put together a whole talk, though, was just how many bad variable names there are in real-world applications. x, usrdb, item, foo, test, blah ... these are real variable names I've encountered in production-grade codebases. And that's just scary! As with most of my talks, there's a lot of common software development sense that's not so common. That's why I'm putting my decade of experience, team-tested trials, and actionable advice together into one nice talk for anyone that wants to level up their variable naming!
What are some common challenges developers face when naming variables, and why is it often considered one of the harder tasks in computer science?
Typically, it's really hard to get the essence of a concept down to a single word or very short phrase; how do you convey a customer that has surpassed the maximum number of refund transactions allowed in a month as clearly and concisely as possible? Now, take that thought experiment and consider that different words mean different things to different people. The same words can even have different meanings or connotations within different contexts. Add onto that the global nature of software development, where English is not always the first language of the developer.
"Developers are basically asked to choose variable names that convey a concept that is also explicitly clear to their team, can't be confused with other concepts, and makes sense within the context of the codebase they are working in."
Adrienne Braganza-Tacke, Senior Developer Advocate, Cisco
Taking all of this into consideration, developers are basically asked to choose variable names that convey a concept that is also explicitly clear to their team, can't be confused with other concepts, and makes sense within the context of the codebase they are working in. While it's definitely more of an art than an exact science, these challenges tend to make good variable naming a difficult task for developers.
In your view, what are the key characteristics that make a variable name clear, concise, and consistent? Could you provide an example of a well-named variable and explain why it's effective?
I'd answer, but I'd hate to spoil my talk 😉 Come listen to it at Visual Studio Live instead!
Can you share a before-and-after example where you improved a variable name and what your thought process and rationale were behind the change?
We actually go through a few examples in my talk where we improve it in real-time with the audience! In essence, we take a very vague, ambiguous and concept-void variable name and slowly iterate on it to become more concise, meaningful and context-filled.
What is one example guideline or best practice you would recommend for developers to follow when naming variables to ensure clarity and consistency?
When naming variables that are integers, be explicit! For example, say you're naming a variable that holds the amount of delay (in milliseconds) for a tooltip. Instead of `tooltipDelay`, a better name could be `tooltipDelayInMilliseconds`. When you're dealing with a number or count of something, say so! For example, if you have a variable that holds how many accounts are currently being requested, `numberOfAccounts` or `numberOfRequestedAccounts` is much more explicit than just `accounts`. Yes, the variable names are a bit longer. However, the cognitive load you remove for that extra effort to make them explicit can make a world of difference when reading, re-reading and understanding code.
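To make that advice concrete, here is a small before-and-after sketch in Python (names adapted to Python's snake_case convention; the values are made up for illustration):

```python
# Before: vague names force the reader to guess meaning and units.
delay = 300
accounts = 25

# After: explicit names carry both the intent and the units.
tooltip_delay_in_milliseconds = 300
number_of_requested_accounts = 25

# Explicit units make conversions obvious at the point of use.
tooltip_delay_in_seconds = tooltip_delay_in_milliseconds / 1000
print(tooltip_delay_in_seconds)  # prints 0.3
```

The longer names cost a few keystrokes but remove an entire class of "is that seconds or milliseconds?" questions for the next reader.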
For developers who are just starting out, what strategies or exercises can they use to develop their skills in naming variables effectively? How can they get better at this seemingly simple yet complex task?
Go back to some code you've written in the past and see if there are variables that you can improve. Are there parts that you re-read or have trouble understanding? Would a more meaningful variable name help? If so, change those variables! Another thing you can do: Moving forward, resist the temptation to use "easy" variables. As you write code, take some time to think of a more meaningful variable name than temp, user, item, or other common concepts. Instead, consider the domain you are in, what the code is doing, and what concepts you are trying to convey. You can think of it as making your code a bit more wordy or verbose, but in reality, you're making it more readable and robust. And you're helping your future self when you have to re-read in a few months; you'll know exactly what you meant by being so explicit!
Note: Those wishing to attend the conference can save hundreds of dollars by registering early, according to the event's pricing page. "Register for VSLive! Chicago by the March 1 Super Early Bird deadline to save up to $400 and secure your seat for intensive developer training in Chicago!" said the organizer of the developer conference, which is presented by the parent company of Visual Studio Magazine.
Posted by David Ramel on 02/20/2024
Creating AI-powered applications is a natural fit for the Microsoft Azure cloud, but sorting out all the different options for the backing data stores can seem daunting.
All developers are looking for an edge to build applications that are data-driven and harness the power of AI, and Azure Databases provide a range of options for secure, scalable and highly available data applications using all the latest languages.
Microsoft's "Databases on Azure" guidance sums up many of the different options: "Develop AI-ready, trusted applications with a range of relational and non-relational databases and keep your focus on application development and innovation, not database management. Intelligence built-in helps automate management tasks like high availability, scaling, and query performance tuning so your applications are always on and ready. Trust Azure for built-in data security controls, broad regional coverage, and more compliance offerings to help strengthen your security posture and support your business growth."
Some of the options Azure offers include:
Purpose-Built Cloud Databases
- Azure Cosmos DB: Fast, distributed NoSQL and relational database at any scale -- Develop high-performance applications of any size or scale with a fully managed and serverless distributed database supporting open source PostgreSQL, MongoDB, and Apache Cassandra, as well as Java, Node.js, Python, and .NET.
- Azure SQL Database: Flexible, fast, and elastic SQL database for your new apps -- Build apps that scale with a fully managed and intelligent SQL database built for the cloud. Grow your applications with near-limitless storage capacity and responsive serverless compute, backed by AI-based advanced security.
Open Source Databases
- Azure Database for PostgreSQL: Fully managed, intelligent, and scalable PostgreSQL database -- Innovate faster with a fully managed PostgreSQL database service for open-source developers building scalable and secure enterprise-ready apps.
- Azure Database for MySQL: Scalable, open-source MySQL database -- Develop apps with a fully managed community MySQL database service delivering high availability, mission-critical performance, and elastic scaling for open-source developers building mobile and web apps.
- Azure Database for MariaDB: Fully managed, community MariaDB -- Build with a fully managed community MariaDB database service delivering high availability and elastic scaling for open-source developers.
- Azure Cache for Redis: Distributed, in-memory, scalable caching -- Accelerate your application's data layer with an in-memory data store that offers increased speed and scale for web applications, backed by the open-source Redis server.
Sorting out all those options and choosing the right tech for your particular application is so important it's the topic of a keynote address at the upcoming Visual Studio Live! developer conference slated for March in Las Vegas.
The keynote, titled "Developing Modern Data Applications Using the Power of AI with Azure Databases," will show you when and where to use the family of Azure Databases including Azure SQL Database, Azure Database for PostgreSQL, Azure Cosmos DB, and Azure Database for MySQL.
"You will see how to build applications to power AI applications driven by data and also AI technologies that speed up your development and improve quality with Microsoft Copilot technologies," said Bob Ward, one of the keynote presenters. The Principal Architect for Microsoft Azure Data will be presenting along with Davide Mauri, Principal Product Manager at Microsoft.
"You will also see various new options on how to use new techniques to access data with less code and integrate your data with Microsoft Fabric," Ward said.
Attendees are promised to learn:
- When and where to use various Azure Database choices
- How to build data-driven AI applications and how to use Copilot experiences to improve the developer experience
- New experiences on how to access your data to save time and code
We recently caught up with Ward to learn more about his March 6 keynote in a short Q&A.
VSLive!: You have such an impressive background. What inspired you to first get involved in data-driven development?
Ward: My first data development projects were on Unix systems writing C code with Ingres (the predecessor of today's PostgreSQL). I was working on healthcare applications, and I saw from that moment how important data is to any business or application. And I found out it was helpful to not just learn to build great code but learn the database side as well to write efficient queries and understand how a database worked. Learning these skills together helped really give me so many career opportunities.
With various Azure Database options like Azure SQL Database, Azure Database for PostgreSQL, Cosmos DB and Azure Database for MySQL, how does a developer determine the best choice for a particular data-driven application?
I realize looking at this lineup can feel a bit overwhelming. We will talk more about this in the keynote, but here is a way to look at it. If you like SQL Server and this is your preferred choice, then Azure SQL Database is perfect for you. But I have some customers who prefer an open source database and still want that "relational" feel. So, I tell them that if they want to go that route, we have excellent options for managed cloud databases such as Azure Database for PostgreSQL or MySQL, depending on their preference and experience. Azure Cosmos DB has traditionally been thought of as a "NoSQL" database. If you love working with native JSON data, you can easily use Cosmos DB. Cosmos DB now also offers experiences including integration with PostgreSQL and MongoDB.
"The great thing about all of these database options is that they take advantage of the power of Azure. This includes experiences for your favorite APIs and managed security, scalability, and availability. But most importantly all of them can be used to build new AI data-driven applications integrating with services like Azure OpenAI. We will talk about this in the keynote and have demos for you to see it in action."
Bob Ward, Principal Architect, Microsoft Azure Data, Microsoft
The great thing about all of these database options is that they take advantage of the power of Azure. This includes experiences for your favorite APIs and managed security, scalability, and availability. But most importantly all of them can be used to build new AI data-driven applications integrating with services like Azure OpenAI. We will talk about this in the keynote and have demos for you to see it in action.
In your personal experience, can you share one example of how GitHub Copilot and Chat improve the development process and enhance quality?
I can't wait to show some of the innovations here for developers. For me, the capabilities GitHub Copilot already has to help me get started building data applications are incredible, all right within the comfort of tools like Visual Studio or VS Code. But there are some other new experiences people may not know about -- our Microsoft Copilot story -- that will help developers. And we can't wait to show it to you during the keynote.
What future trends do you foresee in the integration of AI and database technologies?
Right now it feels like the huge need is to build generative AI applications, but with your own data. And there are many different methods to do this. So, what I see in the future is a merging of these methods into a smaller set of options that give developers the best way to build these applications quickly. I also see the concept of hybrid searching, and even the use of small language models, becoming more popular over time. And per the previous question on Copilots, I believe we will see an explosion of this to help developers build and ship code faster and more efficiently, with better performance, security and fewer bugs.
What do you hope people take away from your keynote?
The most important takeaway I want people to get from the keynote is that Azure Databases all-up are fully managed cloud database services you can rely on today and in the future to build intelligent AI data-driven applications. Our team is committed to staying on top of all the new innovations in AI and ensuring developers have what they need for real-world applications that are data driven using the power of AI without having to give up on the tried-and-true database capabilities of security, scalability and availability.
Note: Those wishing to attend the conference can save hundreds of dollars by registering early, according to the event's pricing page. "Register for VSLive! Las Vegas by the Feb. 9 Extended Early Bird Deadline to save $300 and secure your seat for intensive developer training in exciting Las Vegas!" said the organizer of the developer conference.
Posted by David Ramel on 02/05/2024
Hey, everyone! I'm Mickey Gousset, a staff DevOps architect at GitHub, and I am thrilled to be contributing to the VSLive! blog as a monthly how-to columnist. While every month you can expect tips, tricks and best-practices advice from the development world, for this installment, I thought I'd share some of my predictions for 2024 in the field of software development.
Predicting the future of software development is always an intriguing task, as the field is rapidly evolving. Here are my top three predictions for the software development landscape in 2024.
1. The Rise of AI-Driven Development
OK, you knew this was going to be at the top of the list. ChatGPT took the world by storm in 2023. AI technologies, like machine learning and natural language processing, are becoming sophisticated enough to assist in code generation, bug fixes and even in making design decisions.
In 2024, tools like GitHub Copilot that suggest code snippets will become more advanced, helping reduce development time and improve code quality. AI will play a more integral role in software development, from initial design to testing and deployment. AI-driven development tools will continue to enhance developer productivity and creativity.
If you're curious about all this, look into attending VSLive! Las Vegas this year, where we have an entire track dedicated to "cutting-edge AI."
2. Growth in Cross-Platform Development
The demand for cross-platform development tools will continue to increase, as businesses seek to target multiple platforms (iOS, Android, Web, desktop) simultaneously.
As the mobile market continues to grow and the lines between desktop and mobile blur, developers are looking for efficient ways to build applications for multiple platforms. Cross-platform frameworks provide a cost-effective solution, enabling a single codebase to run on various platforms. We can expect significant improvements in these frameworks, with better performance, more native features and easier debugging.
For example, MAUI is a cross-platform framework introduced by Microsoft as part of .NET 6. It's an evolution of Xamarin.Forms, designed to create native mobile and desktop apps with a single codebase. You can learn more about MAUI on the "Developing New Experiences" track at VSLive! Las Vegas.
3. Continued Emphasis on DevOps and Automation
DevOps practices and automation will become more deeply ingrained in the software development process. Continuous integration and deployment (CI/CD) will continue to evolve, and automation will extend to more areas, such as security (DevSecOps).
The integration of development and operations has been proven to enhance efficiency, reduce errors and speed up deployment. In 2024, we can expect DevOps to evolve further, incorporating more automated processes all while not sacrificing quality. Tools that facilitate continuous testing, integration and monitoring will be crucial. The integration of security into the DevOps pipeline will also be a major focus.
If you're going to be at VSLive! Las Vegas, you can learn the latest on these evolutions in the "Modern Software Engineering" track.
Now, I don't have a crystal ball. I'm sure there is a trend I may have missed, or one that will jump to the top of the stack. But these three areas are definitely worth keeping an eye on. And keep an eye on this blog for future information, tips and tricks for the development world.
Posted by Mickey Gousset on 01/17/2024
Microsoft Fabric was described as "the AI-fication of Microsoft's data business" by RedmondMag writer Joey D'Antoni when it was unveiled at the company's Build 2023 developer conference.
He listed these highlights of the new offering:
- Microsoft's Fabric is a comprehensive data analytics platform that integrates various data tools into a single SaaS offering, aiming to eliminate data silos and promote data sharing within organizations.
- Fabric leverages the concept of data fabric, combining modern trends like data lakes, the delta store, and parquet file formats, presented through a set of standard APIs.
- Fabric incorporates AI capabilities, including Power BI Copilot for DAX language and natural language query functionality. It also introduces Data Activator, a service that monitors data changes and triggers actions, resembling Logic Apps or Power Automate.
"Fabric is a big, bold step from Microsoft," D'Antoni said. "Competitors like Snowflake and partners like Databricks have been making inroads into a traditionally strong business intelligence and analytics market for Microsoft. While Fabric will remain a work in progress for some time, the level of investment and direction from the company in data analytics is promising."
That's a lot of functionality to take in via an article or two, so a hands-on, interactive, step-by-step demonstration of creating Extract, Transform, Load (ETL) pipelines using Fabric might help Microsoft-centric developers get a better handle on the technology.
Luckily, Microsoft's Sally Dabbah is going to helm just such a presentation -- titled "Step-by-Step Guide: Building ETL Workflows in Microsoft Fabric" -- at the March Visual Studio Live! developer conference in Las Vegas.
"This hands-on session will empower attendees to gain a deep understanding of the ETL process, equipping them with practical skills to efficiently manage and transform their organizational data," said Dabbah, who established herself as a significant voice in Azure Cloud Analytics Services by posting regularly on Microsoft's Tech Community blog.
"By attending this session, participants will discover how to harness the power of Microsoft Fabric for seamless data integration, ensuring they can extract valuable insights from their data," she continued.
Attendees are promised the following takeaways:
- Master ETL Workflow Creation: By the end of the session, attendees will be proficient in building ETL workflows from scratch, understanding the essential steps involved in data extraction, transformation and loading.
- Gain In-Depth Fabric Knowledge: Participants will acquire a thorough understanding of Microsoft Fabric, enabling them to leverage its features and capabilities for data integration and management effectively, and to better understand Fabric concepts such as OneLake, OneCopy and OneSecurity.
- Enhance Data Insight Capabilities: This session will equip attendees with the skills needed to unlock valuable insights from their data, ultimately leading to more informed decision-making and improved organizational performance.
We recently caught up with Dabbah to learn more about her 75-minute, introductory-level presentation in a short Q&A.
VSLive!: What inspired you to present a session on building ETL workflows in Microsoft Fabric?
Dabbah: My inspiration came from recognizing the growing need for efficient data handling in today's data-driven world. As organizations increasingly rely on large and complex datasets, the ability to effectively extract, transform and load data becomes crucial. Microsoft Fabric, with its robust capabilities, offers an innovative solution. This session aims to demystify the process and empower professionals to harness these tools effectively.
For those unfamiliar, could you briefly explain what ETL (Extract, Transform, Load) workflows are, and why Microsoft Fabric is a significant tool in this context?
ETL workflows are processes used to extract data from various sources, transform it into a structured format and load it into a target system for analysis and reporting.
"Microsoft Fabric stands out in this context due to its scalability, integration options and advanced features, making it an ideal platform for managing complex data integration tasks."
Sally Dabbah, Data Engineer, Azure Cloud Analytics Services, Microsoft
Your session promises a comprehensive, step-by-step demonstration. Can you give us an overview of what this hands-on approach will look like and how it will benefit attendees, especially those new to ETL workflows?
The session will be structured as a practical, step-by-step guide. Attendees will be walked through the creation of an ETL workflow, starting from data extraction to final loading. This approach will be particularly beneficial for those new to ETL, providing them with a solid foundation and practical skills that can be immediately applied.
Microsoft Fabric offers various features for data integration. Could you highlight some key features or tools within Fabric that are particularly beneficial for building ETL workflows?
Microsoft Fabric offers several features that make it stand out for ETL workflows, such as high scalability, robust data processing capabilities and advanced data integration tools. I will highlight features like real-time data processing and customizable workflow options that cater to various business needs.
You mention that attendees will learn about advanced Fabric concepts like OneLake, OneCopy, OneSecurity and so on. Can you elaborate on how understanding these concepts will enhance their ETL workflow creation?
OneLake, OneCopy, OneSecurity: These advanced concepts represent the cutting-edge aspects of Microsoft Fabric. Understanding OneLake, OneCopy and OneSecurity enables users to manage data more securely, efficiently and in a unified manner. This knowledge enhances the ability to create more sophisticated and secure ETL workflows.
How will mastering ETL workflows in Microsoft Fabric enable attendees to unlock valuable insights from their data, and how can this lead to improved decision-making and organizational performance?
Proficiency in ETL workflows using Microsoft Fabric allows professionals to unlock deep insights from their data, leading to more informed decision-making and improved organizational performance. It equips them with the tools to handle complex data scenarios, thereby enhancing their data analytics capabilities.
Looking ahead, what are some emerging trends or advancements in ETL workflows and data integration that professionals should be aware of, and how does Microsoft Fabric fit into this evolving landscape?
The field of ETL and data integration is rapidly evolving, with trends like real-time data processing, cloud-based integration, and AI-driven analytics becoming more prevalent. Microsoft Fabric is well-positioned in this landscape, offering a platform that adapts to these emerging trends while continuing to provide robust and scalable solutions for ETL workflows.
Note: Those wishing to attend the conference can save hundreds of dollars by registering early, according to the event's pricing page. "Register for VSLive! Las Vegas by the Super Early Bird Deadline (Jan. 16) to save up to $400 and secure your seat for intensive developer training in exciting Las Vegas!" said the organizer of the developer conference.
Posted by David Ramel on 01/15/2024
The consensus is that we are entering a new world following the COVID-19 pandemic, so this might be the time for developers to add to their skillset. The more skills you have the better you are positioned for jobs in the future where multifaceted developers will be in demand.
Companies restarting their business may not have the budget to hire different developers for different tasks. They may need an old-fashioned Jack-or-Jill-of-all-trades. Combining classic application development expertise with Website abilities may be just the ticket for success when the economy begins reopening.
For developers with desktop application development skills, gaining Web frontend language abilities would give you the ability to see Web apps as a whole, which could be valuable to a DevOps team. Or if you are interested in transitioning to a career in Web development, this is also the time to make your move.
If you are looking to add to your personal knowledge base, Visual Studio Live! has a seminar for you! No prior experience is necessary. Become a Web Developer Quick and Easy is scheduled for June 29-30, 2020. It will be a virtual course you can attend online without leaving your home office, which is ideal for the current public health situation.
This seminar is designed for developers who have little to no proficiency with HTML, CSS, Bootstrap, MVC, JavaScript, jQuery or the Web API but would like to become knowledgeable in these standard Web building languages. However, familiarity with Visual Studio, Visual Studio Code, C#, and .NET is recommended. This is a two-day virtual course that will transform you into a Web developer.
The course is designed for any desktop programmer who wants to learn to develop Web applications using .NET and C#, or for a Web developer who would like to refresh their knowledge of the basics of Web languages.
Here’s what the course covers:
As you know, HTML is the fundamental building block of Web development. You will get a thorough overview and understanding of what goes into creating a Web page. The basics of HTML elements and attributes are illustrated through multiple examples.
You'll learn professional techniques to make Websites look great, be more efficient and easier to maintain. For that, Cascading Style Sheets (CSS) are the answer. This part of the seminar serves as an introduction to CSS. You will learn the best practices for working with CSS in your HTML applications.
Today's Websites are being accessed from a range of devices including traditional desktop PCs as well as tablets and smartphones. Developing Websites that are responsive to different-size devices is easy when you use the right tools. Twitter's Bootstrap is the tool of choice these days. Not only is it free, but it also has many free themes that allow you to modify the look and feel quickly. Learning Bootstrap is easy with the resources available on the Web; however, having someone walk you through the basics step-by-step will greatly speed your learning.
Microsoft's MVC framework with its Razor syntax in Visual Studio is a great way to build Web applications quickly and easily. This seminar introduces you to building your first MVC application.
The de facto standard tools for programming Web pages are JavaScript and the jQuery library. You will be introduced to both JavaScript and jQuery. You will learn to create functions, declare and use variables, and interact with and manipulate elements on Web pages using both.
The Web API is fast becoming a requirement for building Web applications. Whether you use jQuery, Angular, React or another JavaScript framework to interact with data, you need the Web API. You will be shown step-by-step how to get, post, put and delete data using the Web API.
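As a rough illustration of those four verbs, here is a minimal sketch in plain JavaScript. The `/api/products` endpoint and the product shape are hypothetical, invented for this example; each helper builds the request descriptor for one CRUD operation, which you would pass to the browser's `fetch()` (or, with jQuery, to `$.ajax()` after renaming `method` to `type` and `body` to `data`).

```javascript
// Hypothetical Web API endpoint, used for illustration only.
const baseUrl = "https://example.com/api/products";

// GET: retrieve one item by id.
function getRequest(id) {
  return { url: `${baseUrl}/${id}`, method: "GET" };
}

// POST: create a new item from a JSON body.
function postRequest(product) {
  return {
    url: baseUrl,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(product),
  };
}

// PUT: replace the item with the given id.
function putRequest(id, product) {
  return {
    url: `${baseUrl}/${id}`,
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(product),
  };
}

// DELETE: remove the item with the given id.
function deleteRequest(id) {
  return { url: `${baseUrl}/${id}`, method: "DELETE" };
}
```

For example, `const r = getRequest(5); fetch(r.url, r)` would issue the GET; the seminar walks through each of these calls step by step.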
You will come away from this virtual seminar with new skills including:
- HTML, CSS and Bootstrap Fundamentals
- MVC and Web API Fundamentals
- JavaScript and jQuery Fundamentals
Sign up today: Become a Web Developer Quick and Easy
Posted by Richard Seeley on 04/22/2020
Continuous Integration and Continuous Deployment (CI/CD) is great in theory but like its Agile and DevOps companions it is not the easiest thing to put into practice.
GitHub Actions, introduced in the past year, are intended to make CI/CD easier for developer teams to implement. If you are struggling to keep your DevOps culture on track and make Agile work, it might be worth seeing if GitHub Actions can help. At least, GitHub and its new Microsoft owners sure hope so. Since Microsoft purchased San Francisco-based GitHub in 2018, some of the Redmond, Washington technology and marketing magic seems to have infused the company.
In the Visual Studio community there were once rumors that GitHub actually slowed things down. But now the Microsoft-owned company is aggressively touting robustly named Actions as a way to keep Agile workflows rolling.
Understanding Actions
GitHub Actions comes in the form of an API, which can be used to orchestrate any workflow, and support CI/CD, as explained in this Visual Studio Magazine article. Microsoft promised that Actions would “let developers and others orchestrate workflows based on events and then let GitHub take care of the execution and details,” explained Converge360 editor David Ramel. “These workflows or pipelines can be just about anything to do with automated software compilation and delivery, including building, testing and deploying applications, triaging and managing issues, collaboration and more.”
This orchestration tool is something like the mythical wrench that works for every job you have.
“GitHub Actions now makes it easier to automate how you build, test, and deploy your projects on any platform, including Linux, macOS, and Windows,” Microsoft said in introducing the CI/CD capabilities this past summer. “Run your workflows in a container or in a virtual machine. Actions also supports more languages and frameworks than ever, including Node.js, Python, Java, PHP, Ruby, C/C++, .NET, Android, and iOS. Testing multi-container apps? You can now test your Web service and its database together by simply adding some docker-compose to your workflow file.”
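A workflow itself is just a YAML file checked into the repository under `.github/workflows/`. As a minimal sketch (the matrix layout and the placeholder build step are illustrative, not taken from the announcement), a cross-platform CI workflow looks roughly like this:

```yaml
# .github/workflows/ci.yml -- a minimal CI sketch; the job name and the
# build/test command are placeholders for illustration.
name: CI
on: [push, pull_request]

jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}    # one job per OS in the matrix
    steps:
      - uses: actions/checkout@v2    # fetch the repository contents
      - name: Build and test
        run: echo "replace with your real build and test commands"
```

Committing that file is all it takes; GitHub then runs the matrix of jobs on every push and pull request.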
GitHub offers help for developers as they begin to use the new technology. For example, when Actions are enabled on a repository, GitHub will provide suggestions for appropriate workflows according to the project.
GitHub Actions for Azure
Almost simultaneously with the announcement of the CI/CD capabilities, Microsoft also announced GitHub Actions for Azure with more support for developers new to Actions.
"You can find our first set of Actions grouped into four repositories on GitHub,” Microsoft said, “each one containing documentation and examples to help you use GitHub for CI/CD and deploy your apps to Azure." The initial list of repositories to check out included:
- azure/actions (login): Authenticate with an Azure subscription.
- azure/appservice-actions: Deploy apps to Azure App Services using the features Web Apps and Web Apps for Containers.
- azure/container-actions: Connect to container registries, including Docker Hub and Azure Container Registry, as well as build and push container images.
- azure/k8s-actions: Connect and deploy to a Kubernetes cluster, including Azure Kubernetes Service (AKS).
Getting Started with GitHub Actions at Microsoft HQ
If you want to get down to the nitty gritty with GitHub Actions, Mickey Gousset, DevOps Architect at Microsoft, is teaching a session on it at the Visual Studio Live! Microsoft HQ event this summer.
Gousset will show how GitHub Actions enables you to create custom software development lifecycle workflows directly in your GitHub repository. You can create tasks, called "actions", and combine them to create custom workflows to build, test, package, release and/or deploy any code project on GitHub. In this session you will learn the ins and outs of GitHub Actions, and walk away with workflows and ideas that you can start using in your own repos immediately.
You will learn:
- What GitHub Actions are and why you care
- How to build/release your GitHub repos using GitHub Actions
- Tips/tricks for your YAML files
Sign up for Visual Studio Live! Microsoft HQ today!
Posted by Richard Seeley on 03/24/2020