Find and Delete Duplicate Rows in SQL [100% Working]



Reviewer: Deepak Prasad

Introduction

Databases are fundamental for storing and managing vast amounts of data efficiently. However, managing data comes with its challenges, one of which is dealing with duplicate rows. Duplicate rows, as the term suggests, are rows in a table that have identical values across the columns that identify a record. They are often the result of human error, such as entering the same record more than once, or of system glitches. In SQL, having duplicate rows in your database can lead to complications and inaccuracies when analyzing data or generating reports. Therefore, learning how to delete duplicate rows in SQL is crucial for database integrity and accuracy.

Removing duplicate rows is not merely about cleaning up the data; it's about maintaining a high level of data quality and reliability. Being proficient in techniques to delete duplicate records in SQL ensures that your database operates efficiently and your queries yield accurate results. This skill is fundamental for database administrators, data analysts, and anyone working extensively with SQL databases. Throughout this tutorial, we'll delve deep into various methods and best practices to effectively delete duplicate rows and maintain a streamlined and efficient database.

 

Sample Table to Delete Duplicate Rows in SQL

This SQL command creates a table named Employees with the columns EmployeeID, FirstName, LastName, Age, and Department. It also inserts some rows into the table, including duplicates for illustrative purposes, such as the repeated entries for 'Aarav Patel', 'Vihaan Kumar', and 'Aanya Sharma'.

CREATE TABLE Employees (
    EmployeeID INT PRIMARY KEY NOT NULL,
    FirstName VARCHAR(255) NOT NULL,
    LastName VARCHAR(255) NOT NULL,
    Age INT,
    Department VARCHAR(255)
);

INSERT INTO Employees (EmployeeID, FirstName, LastName, Age, Department) VALUES 
(1, 'Aarav', 'Patel', 30, 'HR'),
(2, 'Aanya', 'Sharma', 25, 'Marketing'),
(3, 'Vihaan', 'Kumar', 35, 'IT'),
(4, 'Aarav', 'Patel', 30, 'HR'),
(5, 'Isha', 'Gupta', 28, 'Finance'),
(6, 'Advait', 'Deshpande', 32, 'Marketing'),
(7, 'Vihaan', 'Kumar', 35, 'IT'),
(8, 'Saumya', 'Singh', 29, 'Finance'),
(9, 'Aanya', 'Sharma', 25, 'Marketing');

 

Methods to Identify Duplicate Rows

Handling duplicate rows is a common but crucial task in database management. Before you can delete duplicate rows in SQL, identifying them is the first step. There are several ways to identify duplicates, and in this section, we will explore some common methods using the Employees table in SQL Server.

 

1. Using SELECT with GROUP BY

This method is useful to find columns with duplicate values. The GROUP BY clause groups the rows that have the same values in specified columns into aggregated data.

SELECT FirstName, LastName, Age, Department, COUNT(*)
FROM Employees
GROUP BY FirstName, LastName, Age, Department
HAVING COUNT(*) > 1;

This query shows the duplicate rows in the Employees table. It helps you identify which records are repeated, which is the first step before you delete duplicate records in SQL.
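
With the sample data inserted earlier, the query returns the three repeated combinations, roughly as follows (the count column is unaliased, so your client may label it differently):

FirstName | LastName | Age | Department | COUNT(*)
---------------------------------------------------
Aarav     | Patel    | 30  | HR         | 2
Aanya     | Sharma   | 25  | Marketing  | 2
Vihaan    | Kumar    | 35  | IT         | 2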

 

2. Using SELECT with DISTINCT

The DISTINCT keyword returns only the unique combinations of values in the specified columns. Comparing its row count with the total row count of the table tells you whether duplicates exist.

SELECT DISTINCT FirstName, LastName, Age, Department
FROM Employees;

This command displays only the unique rows; if it returns fewer rows than the table contains, you know the table holds duplicates and can prepare to delete duplicate rows in SQL.

 

3. Using COUNT() to Find Duplicates

Using the COUNT() function can help identify how many times each record has been duplicated.

SELECT FirstName, LastName, Age, Department, COUNT(*)
FROM Employees
GROUP BY FirstName, LastName, Age, Department
ORDER BY COUNT(*) DESC;

This command helps in quantifying the duplicates, an essential aspect to understand before you proceed to delete duplicate records in SQL.

 

Using DELETE Statement to Remove Duplicates

After identifying duplicate rows, the next crucial step in maintaining database integrity is to delete duplicate rows in SQL. One common method to achieve this is by using the DELETE statement in conjunction with other SQL clauses to specify and remove the unwanted duplicate records.

The general approach uses a subquery (or a CTE) to identify the duplicate rows based on specific columns and then deletes them.

Here's an example using the Employees table in SQL Server:

Deleting duplicates based on certain columns: If we want to delete duplicates by keeping one original record based on the FirstName, LastName, Age, and Department columns, we can execute the following query.

WITH CTE AS (
    SELECT 
        ROW_NUMBER() OVER (PARTITION BY FirstName, LastName, Age, Department ORDER BY EmployeeID) as RowNumber,
        EmployeeID,
        FirstName,
        LastName,
        Age,
        Department
    FROM Employees
)
DELETE FROM CTE WHERE RowNumber > 1;

In this example, a Common Table Expression (CTE) is used to assign a unique RowNumber to each record within the partition of duplicate values. Then, it proceeds to delete duplicate records in SQL by removing rows where RowNumber is more than 1, ensuring that one unique record is retained.
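
To confirm the cleanup, you can re-run the duplicate check from the identification section; after the delete it should return no rows:

SELECT FirstName, LastName, Age, Department, COUNT(*)
FROM Employees
GROUP BY FirstName, LastName, Age, Department
HAVING COUNT(*) > 1;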

Deleting all duplicate records, including the original: In scenarios where you want to remove every occurrence of a duplicated row, without keeping any copy, you can use a correlated subquery that checks whether another row with the same values exists.

DELETE FROM Employees
WHERE EXISTS (
    SELECT 1
    FROM Employees AS e2
    WHERE e2.FirstName  = Employees.FirstName
      AND e2.LastName   = Employees.LastName
      AND e2.Age        = Employees.Age
      AND e2.Department = Employees.Department
      AND e2.EmployeeID <> Employees.EmployeeID
);

This query deletes every row for which at least one other row with the same FirstName, LastName, Age, and Department exists, so all occurrences of the duplicates, including the original, are removed.

 

Using Common Table Expressions (CTE) to Delete Duplicates

Common Table Expressions (CTEs) offer a readable and maintainable approach to both identifying and deleting duplicate rows in SQL. A CTE defines a temporary result set that you can reference within a DELETE statement to specify exactly which rows to remove, which keeps the logic for managing duplicates easy to follow.

A CTE is a temporary result set that you can reference within a SELECT, INSERT, UPDATE, or DELETE statement. CTEs make the queries more readable and modular by allowing you to define the CTE at the start of the query.

CTEs can be particularly helpful when you want to delete duplicate records in SQL by isolating the duplicates in a CTE and then specifying the deletion criteria.
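
As a simple illustration using the sample table, the following CTE just names an intermediate result set and selects from it; the deletion queries below apply the same pattern to a DELETE:

WITH MarketingEmployees AS (
    SELECT EmployeeID, FirstName, LastName
    FROM Employees
    WHERE Department = 'Marketing'
)
SELECT * FROM MarketingEmployees;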

Using CTE to Keep One Unique Record

WITH DuplicateRecordsCTE AS (
    SELECT 
        ROW_NUMBER() OVER (PARTITION BY FirstName, LastName, Age, Department ORDER BY EmployeeID) as RowNumber,
        EmployeeID
    FROM Employees
)
DELETE FROM DuplicateRecordsCTE
WHERE RowNumber > 1;

In this query, the CTE DuplicateRecordsCTE assigns a unique row number to each record within a partition of duplicates. The DELETE statement then removes the duplicates, keeping one unique row from each set of duplicates, thus helping delete duplicate records in SQL.

Using CTE to Delete All Duplicate Records, Including the Original

WITH DuplicateRecordsCTE AS (
    SELECT 
        COUNT(*) OVER (PARTITION BY FirstName, LastName, Age, Department) as DuplicateCount
    FROM Employees
)
DELETE FROM DuplicateRecordsCTE
WHERE DuplicateCount > 1;

This query forms a CTE that counts, for each row, how many rows share the same FirstName, LastName, Age, and Department. The DELETE statement then removes every row belonging to a group with more than one member, so all occurrences of the duplicates, including the original, are deleted.

 

Using ROW_NUMBER() to Delete Duplicates

Utilizing the ROW_NUMBER() function is a practical approach to delete duplicate rows in SQL. This function assigns a unique sequential integer to rows within a result set based on the ORDER BY clause in the function.

ROW_NUMBER() is used to assign a unique sequential integer to rows within each partition of the result set. It is instrumental in identifying and isolating duplicates, making the process to delete duplicate records in SQL more straightforward.

ROW_NUMBER() OVER (
    PARTITION BY column1, column2, ...
    ORDER BY column3, column4, ...
)

Assigning Row Numbers to Identify Duplicates

SELECT 
    ROW_NUMBER() OVER (PARTITION BY FirstName, LastName, Age, Department ORDER BY EmployeeID) as RowNumber,
    EmployeeID,
    FirstName,
    LastName,
    Age,
    Department
FROM Employees;

This query will assign a unique row number to each row, making duplicates identifiable for subsequent deletion.
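
Run against the original sample data, the result would look roughly like this (row order is not guaranteed; the duplicates receive RowNumber 2):

RowNumber | EmployeeID | FirstName | LastName  | Age | Department
------------------------------------------------------------------
1         | 2          | Aanya     | Sharma    | 25  | Marketing
2         | 9          | Aanya     | Sharma    | 25  | Marketing
1         | 1          | Aarav     | Patel     | 30  | HR
2         | 4          | Aarav     | Patel     | 30  | HR
1         | 6          | Advait    | Deshpande | 32  | Marketing
1         | 5          | Isha      | Gupta     | 28  | Finance
1         | 8          | Saumya    | Singh     | 29  | Finance
1         | 3          | Vihaan    | Kumar     | 35  | IT
2         | 7          | Vihaan    | Kumar     | 35  | IT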

Deleting Duplicates Using ROW_NUMBER()

WITH CTE AS (
    SELECT 
        ROW_NUMBER() OVER (PARTITION BY FirstName, LastName, Age, Department ORDER BY EmployeeID) as RowNumber,
        EmployeeID
    FROM Employees
)
DELETE FROM CTE WHERE RowNumber > 1;

The ROW_NUMBER() function, in conjunction with a CTE, makes it more convenient to delete duplicate rows in SQL by isolating and removing the unnecessary duplicate records.

Best Practices

  • Partition Columns: Choose partition columns that define the uniqueness of a row, ensuring accurate identification of duplicates.
  • Ordering: Carefully select the ORDER BY columns within the ROW_NUMBER() function to define which duplicate rows to retain and which to delete.
  • Transaction Control: Consider using transactions to make the deletion process atomic, so that you can roll back the change in case of mistakes (a minimal sketch follows this list).
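
To illustrate the transaction-control point, here is a minimal sketch that wraps the ROW_NUMBER() deletion in a transaction and checks the affected row count before committing:

BEGIN TRANSACTION;

WITH CTE AS (
    SELECT 
        ROW_NUMBER() OVER (PARTITION BY FirstName, LastName, Age, Department ORDER BY EmployeeID) as RowNumber
    FROM Employees
)
DELETE FROM CTE WHERE RowNumber > 1;

-- Check how many rows the DELETE removed before making it permanent
SELECT @@ROWCOUNT AS RowsDeleted;

-- If the number looks wrong, undo the change with ROLLBACK TRANSACTION;
-- otherwise make it permanent:
COMMIT TRANSACTION;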

By leveraging the ROW_NUMBER() function, you can efficiently manage and delete duplicate records in SQL, maintaining the accuracy and integrity of your SQL Server databases.

 

Using RANK() and DENSE_RANK() Functions

The RANK() and DENSE_RANK() functions in SQL Server are powerful tools that can be leveraged to delete duplicate rows in SQL. These ranking functions assign a unique rank to rows within a result set based on specified column values, facilitating the identification and removal of duplicate records.

Difference between RANK() and DENSE_RANK()

  • RANK(): Gives unique ranks to rows but leaves gaps in the ranking when there are duplicates.
  • DENSE_RANK(): Assigns ranks like RANK() but doesn’t leave gaps in the ranking sequence when there are duplicates.

Using RANK() and DENSE_RANK() to Identify Duplicates

Applying these functions can help you visualize the duplicates by assigning ranks to the rows.

SELECT 
    EmployeeID, FirstName, LastName, Age, Department,
    RANK() OVER (PARTITION BY FirstName, LastName, Age, Department ORDER BY EmployeeID) as rnk,
    DENSE_RANK() OVER (PARTITION BY FirstName, LastName, Age, Department ORDER BY EmployeeID) as drnk
FROM Employees;

Using RANK() and DENSE_RANK() to Delete Duplicate Records in SQL

WITH CTE AS (
    SELECT 
        RANK() OVER (PARTITION BY FirstName, LastName, Age, Department ORDER BY EmployeeID) as rnk,
        EmployeeID
    FROM Employees
)
DELETE FROM CTE WHERE rnk > 1;

OR

WITH CTE AS (
    SELECT 
        DENSE_RANK() OVER (PARTITION BY FirstName, LastName, Age, Department ORDER BY EmployeeID) as drnk,
        EmployeeID
    FROM Employees
)
DELETE FROM CTE WHERE drnk > 1;

Using these CTE queries with either RANK() or DENSE_RANK(), you can effectively delete duplicate rows in SQL by targeting the rows with a rank greater than 1. Note that because the ORDER BY column (EmployeeID) is unique within each partition, there are no ties here, so RANK(), DENSE_RANK(), and ROW_NUMBER() all produce the same numbering and delete the same rows.

 

Handling Duplicates in Joins

Joins are fundamental in SQL operations, allowing for the combination of rows from two or more tables based on related columns. However, joins can inadvertently introduce duplicates, necessitating strategies to delete duplicate rows in SQL.

Duplicates may emerge in the result of a join operation when a row in one table matches multiple rows in the other table, which is typical of one-to-many relationships.

Let's consider two tables, a simplified Employees table and an EmployeeProjects table. Here is how you could create them and populate them with some dummy data.

-- Creating the Employees Table
CREATE TABLE Employees (
    EmployeeID INT PRIMARY KEY,
    FirstName VARCHAR(255),
    LastName VARCHAR(255)
);

-- Inserting data into the Employees Table
INSERT INTO Employees (EmployeeID, FirstName, LastName) VALUES
(1, 'Raj', 'Kumar'),
(2, 'Sunita', 'Sharma'),
(3, 'Amit', 'Patel');

-- Creating the EmployeeProjects Table
CREATE TABLE EmployeeProjects (
    ProjectID INT PRIMARY KEY,
    EmployeeID INT,
    ProjectName VARCHAR(255)
);

-- Inserting data into the EmployeeProjects Table
INSERT INTO EmployeeProjects (ProjectID, EmployeeID, ProjectName) VALUES
(1, 1, 'ProjectA'),
(2, 1, 'ProjectB'),
(3, 2, 'ProjectC');

If you perform a join between the Employees and EmployeeProjects tables to list the projects assigned to each employee, you might see duplicates because one employee can be associated with multiple projects.

SELECT 
    e.EmployeeID,
    e.FirstName,
    e.LastName,
    ep.ProjectName
FROM 
    Employees e
JOIN 
    EmployeeProjects ep ON e.EmployeeID = ep.EmployeeID;

Output:

EmployeeID | FirstName | LastName | ProjectName
-------------------------------------------------
1          | Raj       | Kumar    | ProjectA
1          | Raj       | Kumar    | ProjectB
2          | Sunita    | Sharma   | ProjectC

You can see that Raj Kumar appears twice in the result because he is associated with two projects. This is a simple case, and duplicates might become more complex with more extensive data and complicated join conditions.

1. Using DISTINCT or an Aggregate Function

If each employee only needs to appear once in the output, you have two options: select DISTINCT employee columns while leaving out ProjectName (the column whose values differ), or keep one project per employee by pairing an aggregate function such as MAX() with GROUP BY, as shown below. Either way, the repetition of employee details is removed from the result set.

SELECT 
    e.EmployeeID,
    e.FirstName,
    e.LastName,
    MAX(ep.ProjectName) as ProjectName
FROM 
    Employees e
LEFT JOIN 
    EmployeeProjects ep ON e.EmployeeID = ep.EmployeeID
GROUP BY 
    e.EmployeeID,
    e.FirstName,
    e.LastName;

2. Applying GROUP BY:

Another way to manage duplicates during joins is the GROUP BY clause, which groups rows that share the same values in the specified columns into aggregated data. For example, if you want to know how many projects each employee is working on rather than listing every project, grouping collapses the repeated employee rows into a single aggregated row per employee.

SELECT 
    e.EmployeeID,
    e.FirstName,
    e.LastName,
    COUNT(ep.ProjectID) as NumberOfProjects
FROM 
    Employees e
JOIN 
    EmployeeProjects ep ON e.EmployeeID = ep.EmployeeID
GROUP BY 
    e.EmployeeID,
    e.FirstName,
    e.LastName;

3. Utilizing CTEs:

Common Table Expressions (CTEs) offer another approach to managing duplicates when working with joins. CTEs allow you to create temporary result sets that can be easily referenced within a SELECT, INSERT, UPDATE, or DELETE statement. CTEs can be used to neatly organize your logic, making it simpler to identify and delete duplicate rows in SQL queries involving joins.

WITH EmployeeProject_CTE AS (
    SELECT 
        e.EmployeeID,
        e.FirstName,
        e.LastName,
        ep.ProjectName,
        ROW_NUMBER() OVER(PARTITION BY e.EmployeeID ORDER BY ep.ProjectID) as RowNumber
    FROM 
        Employees e
    JOIN 
        EmployeeProjects ep ON e.EmployeeID = ep.EmployeeID
)
SELECT 
    EmployeeID,
    FirstName,
    LastName,
    ProjectName
FROM 
    EmployeeProject_CTE
WHERE 
    RowNumber = 1;

 

Automating Duplicate Removal

Automating the process to delete duplicate rows in SQL can be an effective way to maintain database integrity and data accuracy consistently. Automation can be achieved by creating stored procedures and scheduling them for regular execution.

 

1. Creating Stored Procedures

Stored procedures can encapsulate the logic for identifying and deleting duplicates, making the process reusable and easy to manage.

CREATE PROCEDURE sp_DeleteEmployeeDuplicates
AS
BEGIN
    WITH CTE AS (
        SELECT 
            ROW_NUMBER() OVER (PARTITION BY FirstName, LastName, Age, Department ORDER BY EmployeeID) as RowNumber,
            EmployeeID
        FROM Employees
    )
    DELETE FROM CTE WHERE RowNumber > 1;
END;

This stored procedure, sp_DeleteEmployeeDuplicates, will delete duplicate records in SQL from the Employees table when executed.

 

2. Scheduling Automatic Duplicate Removal

You can schedule the execution of the stored procedure using SQL Server Agent or another scheduling tool, ensuring that the procedure runs at specific intervals to automatically remove duplicates.

SQL Server Agent Job: Create a new job in SQL Server Agent, define the schedule, and add a step to execute the stored procedure.

EXEC sp_DeleteEmployeeDuplicates;

By setting up a scheduled job, you ensure that the system automatically executes the logic to delete duplicate rows in SQL, maintaining the data's cleanliness and integrity over time.
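
If you prefer to script the job rather than use the SQL Server Agent UI, the following rough sketch uses the msdb Agent stored procedures; the job, schedule, and database names are placeholders to adjust for your environment:

USE msdb;
GO

EXEC dbo.sp_add_job
    @job_name = N'Nightly - Delete Employee Duplicates';

EXEC dbo.sp_add_jobstep
    @job_name      = N'Nightly - Delete Employee Duplicates',
    @step_name     = N'Run cleanup procedure',
    @subsystem     = N'TSQL',
    @command       = N'EXEC dbo.sp_DeleteEmployeeDuplicates;',
    @database_name = N'YourDatabase';

EXEC dbo.sp_add_schedule
    @schedule_name     = N'Daily at 2 AM',
    @freq_type         = 4,      -- daily
    @freq_interval     = 1,      -- every day
    @active_start_time = 020000; -- 02:00:00

EXEC dbo.sp_attach_schedule
    @job_name      = N'Nightly - Delete Employee Duplicates',
    @schedule_name = N'Daily at 2 AM';

EXEC dbo.sp_add_jobserver
    @job_name    = N'Nightly - Delete Employee Duplicates',
    @server_name = N'(LOCAL)';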

 

Best Practices for Automation

  • Error Handling: Include error-handling mechanisms in the stored procedure to manage potential issues gracefully (a sketch combining error handling and logging follows this list).
  • Logging and Auditing: Implement logging within the stored procedure to keep a record of the rows deleted or any errors encountered.
  • Review and Adjustment: Regularly review and adjust the stored procedure and schedule to align with changing data and business requirements.
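
One way to fold error handling and logging into the earlier procedure is sketched below; the DuplicateCleanupLog table is a hypothetical audit table, so adjust the names and columns to your environment:

-- Hypothetical audit table for recording each cleanup run
CREATE TABLE DuplicateCleanupLog (
    LogID        INT IDENTITY(1,1) PRIMARY KEY,
    RunAt        DATETIME2 DEFAULT SYSUTCDATETIME(),
    RowsDeleted  INT,
    ErrorMessage NVARCHAR(4000) NULL
);
GO

ALTER PROCEDURE sp_DeleteEmployeeDuplicates
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;

        WITH CTE AS (
            SELECT 
                ROW_NUMBER() OVER (PARTITION BY FirstName, LastName, Age, Department ORDER BY EmployeeID) as RowNumber
            FROM Employees
        )
        DELETE FROM CTE WHERE RowNumber > 1;

        -- Record how many duplicates were removed in this run
        INSERT INTO DuplicateCleanupLog (RowsDeleted) VALUES (@@ROWCOUNT);

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        -- Log the failure and re-raise the error so the scheduler reports it
        INSERT INTO DuplicateCleanupLog (RowsDeleted, ErrorMessage)
        VALUES (0, ERROR_MESSAGE());
        THROW;
    END CATCH
END;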

 

Considerations in Different SQL Database Systems

Different SQL database systems have unique syntax and functionalities, but the essential strategies to delete duplicate rows in SQL remain consistent across various platforms. Here’s how you can manage duplicates in popular database systems like MySQL, SQL Server, Oracle, and PostgreSQL.

 

Handling Duplicates in MySQL

MySQL supports the use of the DELETE statement with a subquery to identify and remove duplicates.

DELETE e1
FROM Employees e1
JOIN Employees e2
    ON  e1.FirstName  = e2.FirstName
    AND e1.LastName   = e2.LastName
    AND e1.EmployeeID > e2.EmployeeID;

This MySQL query self-joins the Employees table and deletes every row for which an earlier row (one with a lower EmployeeID) has the same first and last name, so only the first occurrence of each name is kept.

 

Handling Duplicates in SQL Server

In SQL Server, you can use Common Table Expressions (CTEs) alongside the ROW_NUMBER() function to handle duplicates effectively.

WITH CTE AS (
    SELECT 
        ROW_NUMBER() OVER (PARTITION BY FirstName, LastName ORDER BY EmployeeID) as RowNumber,
        EmployeeID
    FROM Employees
)
DELETE FROM CTE WHERE RowNumber > 1;

This SQL Server example will delete duplicate rows in SQL, ensuring each employee appears only once based on their name.

 

Handling Duplicates in Oracle

Oracle does not allow a DELETE directly against an inline view that contains an analytic function, so the usual approach combines ROW_NUMBER() with Oracle's ROWID pseudocolumn to target the duplicate rows.

DELETE FROM Employees
WHERE ROWID IN (
    SELECT rid
    FROM (
        SELECT ROWID AS rid,
               ROW_NUMBER() OVER (PARTITION BY FirstName, LastName ORDER BY EmployeeID) AS RowNumber
        FROM Employees
    )
    WHERE RowNumber > 1
);

This Oracle query keeps the first row (lowest EmployeeID) for each name and deletes the rest, ensuring uniqueness.

 

Handling Duplicates in PostgreSQL

PostgreSQL allows for the usage of CTEs in conjunction with the ROW_NUMBER() function to remove duplicates effectively.

WITH CTE AS (
    SELECT 
        ROW_NUMBER() OVER (PARTITION BY FirstName, LastName ORDER BY EmployeeID) as RowNumber,
        EmployeeID
    FROM Employees
)
DELETE FROM Employees WHERE EmployeeID IN (SELECT EmployeeID FROM CTE WHERE RowNumber > 1);

This query deletes the duplicate rows in the PostgreSQL Employees table while keeping the first occurrence (lowest EmployeeID) of each name, maintaining the integrity of the table.
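
PostgreSQL also supports a more compact DELETE ... USING self-join. As a sketch, the following keeps the row with the lowest EmployeeID in each group of matching names:

DELETE FROM Employees e1
USING Employees e2
WHERE e1.EmployeeID > e2.EmployeeID
  AND e1.FirstName = e2.FirstName
  AND e1.LastName  = e2.LastName;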

 

Frequently Asked Questions (FAQs)

Can duplicates in SQL be avoided during data insertion?

Yes, duplicates can often be avoided during data insertion by applying unique constraints or using primary keys on the columns that uniquely identify a record. For instance, you can create a unique index on columns to ensure that the combination of values in these columns is unique, hence preventing duplicate entries.
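
For example, on the sample Employees table, a unique constraint across the columns that define a duplicate would reject any further repeated insert (the constraint name is illustrative, and the statement only succeeds once existing duplicates have been removed):

ALTER TABLE Employees
ADD CONSTRAINT UQ_Employees_Identity
    UNIQUE (FirstName, LastName, Age, Department);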

What is the impact of duplicates in a database?

Duplicates in a database can lead to inaccurate results, data inconsistency, and increased storage usage. They can also cause complications in data retrieval processes and analyses, making it difficult to obtain correct and reliable information from the database.

What is the role of the DISTINCT keyword in managing duplicates?

The DISTINCT keyword in SQL is used to return unique records in the output of a SELECT statement. It helps in eliminating duplicate rows in the result set, allowing you to view or work with unique records only.

Can I use the GROUP BY clause for handling duplicates?

Yes, the GROUP BY clause can indirectly assist in handling duplicates by grouping identical rows together. While it doesn’t delete duplicates, it can help in presenting a view of the data where the duplicates are aggregated, often used with aggregate functions like COUNT, SUM, AVG, etc.

Is it necessary to back up data before performing duplicate deletion operations?

It’s advisable to back up data before performing deletion operations, including removing duplicates. Backing up ensures that you can restore the database to its previous state in case something goes wrong during the deletion process, preserving data integrity.
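
For instance, in SQL Server a full backup can be taken just before the cleanup; the database name and file path below are placeholders:

BACKUP DATABASE YourDatabase
TO DISK = N'D:\Backups\YourDatabase_BeforeDedup.bak'
WITH INIT, NAME = N'Pre-deduplication backup';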

How does the ROW_NUMBER() function assist in deleting duplicates?

The ROW_NUMBER() function assigns a unique sequential number to rows within a result set. When used with partitioning, it helps in identifying duplicates, allowing you to keep one instance of each duplicate set and delete the rest, assisting in the process to delete duplicate rows in SQL.

Are there any third-party tools available for managing duplicates in SQL databases?

Yes, there are third-party tools available that can assist in identifying and managing duplicates in SQL databases. These tools often come with graphical user interfaces and additional features that can simplify the process of finding and removing duplicate records. However, a good understanding of SQL and the database’s structure is essential for using these tools effectively.

 

Conclusion

Throughout this guide, various methods and strategies have been explored to delete duplicate rows in SQL. From identifying duplicates using methods such as GROUP BY and DISTINCT, to deleting them using techniques like Common Table Expressions (CTEs), ROW_NUMBER(), and more, a wide range of approaches have been detailed to help you delete duplicate records in SQL effectively.

Key Takeaways:

  • Identifying duplicates accurately is crucial. Different SQL functions and clauses like DISTINCT, GROUP BY, and window functions like ROW_NUMBER(), RANK(), and DENSE_RANK() can be instrumental in this aspect.
  • Multiple methods exist for deleting duplicates, each with its benefits and considerations. Techniques such as using DELETE statements with subqueries, CTEs, and window functions are vital tools in your SQL arsenal.
  • Consistency and adaptability are crucial. Databases evolve, and having a solid foundational understanding paired with continual learning and practice is key to managing them effectively.

 

Resources and Further Reading

  1. SQL Server Documentation
    • Official SQL Server Documentation: Explore comprehensive resources for SQL Server including T-SQL references, troubleshooting, and best practices.
    • T-SQL Reference: Dive deeper into Transact-SQL with the official documentation, including syntax, arguments, and examples.
  2. MySQL Documentation
    • Official MySQL Documentation: Access the official MySQL documentation for in-depth knowledge on various MySQL topics including the DELETE statement.
    • MySQL DELETE Statement: Learn the syntax and usage of the DELETE statement in MySQL for removing data from database tables.
  3. Oracle Database Documentation
    • Official Oracle Database Documentation: Find official Oracle documentation for comprehensive guidance on Oracle Database management and development.
    • SQL Language Reference: Enhance your SQL knowledge with Oracle’s official SQL language reference guide, covering various SQL commands and functions.
  4. PostgreSQL Documentation
    • Official PostgreSQL Documentation: Explore PostgreSQL’s official documentation for a wide array of resources including SQL command references.
    • SQL Command Reference: Get detailed insights into PostgreSQL SQL commands with official reference materials, examples, and syntax guides.

 

Falguni Thakker


She is a dedicated professional with expertise in SQL, Python, C++, and Linux. Currently serving as a professor at a prestigious university. With a passion for teaching and a strong technical background, she inspires the next generation of computer scientists. You can connect with her on her LinkedIn profile.
