Exporting a pandas DataFrame to SQLite duplicates the data set instead of keeping a single updated data set
0 votes / 06 November 2018

I am loading a pandas DataFrame from a CSV file into an SQLite database via SQLAlchemy. The initial load works just fine, but when I re-run the following code the same data is exported again and the database ends up containing two identical data sets.

How can I change the code so that only new or changed data is loaded into the database?

import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, Numeric, DateTime
from sqlalchemy.orm import sessionmaker
from datetime import datetime
import pandas as pd

# Set up the engine that connects to the SQLite database file.
engine = create_engine('sqlite:///historical_data3.db')
conn = engine.connect()
Base = declarative_base()

# Declarative model describing the target table, following the standard SQLAlchemy pattern.
class Timeseries_Values(Base):
    __tablename__ = 'Timeseries_Values'

    #id = Column(Integer)
    Date = Column(DateTime, primary_key=True)
    ProductID = Column(Integer, primary_key=True)
    Value = Column(Numeric)

    def __repr__(self):
        return "(Date='%s', ProductID='%s', Value='%s')" % (self.Date, self.ProductID, self.Value)



fileToRead = r'V:\PYTHON\ProjectDatabase\HistoricalDATA_V13.csv'
tableToWriteTo = 'Timeseries_Values'

# Read the CSV into a DataFrame; the file uses ';' as separator and ',' as decimal mark.
df = pd.read_csv(fileToRead, sep=';', decimal=',', parse_dates=['Date'], dayfirst=True)
# orient='records' turns the DataFrame into a list of dicts, the format expected for a bulk insert.
listToWrite = df.to_dict(orient='records')

# Reflect the existing table from the database so it can be used for the bulk insert.
metadata = sqlalchemy.schema.MetaData(bind=engine, reflect=True)
table = sqlalchemy.Table(tableToWriteTo, metadata, autoload=True)

# Open the session
Session = sessionmaker(bind=engine)
session = Session()

# Insert all records into the table in one bulk statement
conn.execute(table.insert(), listToWrite)

# Commit the changes
session.commit()

# Close the session
session.close()

1 Answer

0 votes / 13 November 2018

It works now; I added the df.to_sql call:

import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, Numeric, DateTime
from sqlalchemy.orm import sessionmaker
from datetime import datetime
import pandas as pd

# Set up the engine that connects to the SQLite database file.
engine = create_engine('sqlite:///historical_data3.db')
conn = engine.connect()
Base = declarative_base()

# Declarative model describing the target table, following the standard SQLAlchemy pattern.
class Timeseries_Values(Base):
    __tablename__ = 'Timeseries_Values'

    #id = Column(Integer)
    Date = Column(DateTime, primary_key=True)
    ProductID = Column(Integer, primary_key=True)
    Value = Column(Numeric)


fileToRead = r'V:\PYTHON\ProjectDatabase\HistoricalDATA_V13.csv'
tableToWriteTo = 'Timeseries_Values'

# Read the CSV into a DataFrame; the file uses ';' as separator and ',' as decimal mark.
df = pd.read_csv(fileToRead, sep=';', decimal=',', parse_dates=['Date'], dayfirst=True)

# df.to_sql writes the whole DataFrame in one call. With if_exists='replace' the
# existing table is dropped and re-created on every run, so re-running the script
# no longer accumulates duplicate rows; index=False keeps the DataFrame index out
# of the table. The earlier manual bulk insert is no longer needed, since keeping
# it would write the same records a second time.
df.to_sql(name=tableToWriteTo, con=conn, if_exists='replace', index=False)

# Close the connection
conn.close()
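
The accepted fix simply rewrites the whole table on every run. If the goal from the original question is to write only new or changed rows, SQLite's native upsert is one option. The sketch below is not part of the answer above, just a minimal illustration: it assumes SQLAlchemy 1.4+ and SQLite 3.24+ (which added ON CONFLICT support), reuses engine, df and the Timeseries_Values model from the code above, and requires that the table was created from that model (for example via Base.metadata.create_all) so that (Date, ProductID) really is a primary key; a table re-created by if_exists='replace' loses that key.

from sqlalchemy.dialects.sqlite import insert as sqlite_insert

# Make sure the table defined by the declarative model exists, including its
# composite primary key on (Date, ProductID).
Base.metadata.create_all(engine)

# Upsert sketch: insert each record, and if a row with the same (Date, ProductID)
# already exists, update its Value instead of adding a duplicate.
records = df.to_dict(orient='records')

with engine.begin() as connection:
    for record in records:
        stmt = sqlite_insert(Timeseries_Values.__table__).values(**record)
        stmt = stmt.on_conflict_do_update(
            index_elements=['Date', 'ProductID'],
            set_={'Value': stmt.excluded.Value},
        )
        connection.execute(stmt)

Unlike if_exists='replace', this keeps the table definition intact and only touches rows that are genuinely new or changed; the row-by-row loop is simple rather than fast, which is fine for moderately sized CSV files.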