Oracle STARTUP GIVES ORA-1172 AND ORA-600[3020]

PROBLEM:
Received ORA-1172: recovery of thread %s stuck at block 15902 of file 6
during a startup of the database after a backup.
It looks like the database was NOT shut down normally before the
backup, but was instead aborted.
On trying to issue: recover datafile '<file 6>',
ct then receives ORA-600[3020][402669086][1][64][1]......
402669086 being the dba mentioned in the ORA-1172 above.
=========================
DIAGNOSTIC ANALYSIS:
We received ct's database and tried to recover it.
We got the same problems and took a dd dump of block 15902.
The beginning of the block dump shows:
0000000 0601 0000 1800 3e1c 0000 007b 0000 000a  <<======
0000020 0000 0000 0100 0000 0000 0509 4000 65e5
0000040 0001 7b88 0001 0200 0000 0000 0002 0016
0000060 0000 05c7 0800 1055 004c 01df 8000 0001
0000100 4000 65e4 0001 0005 ffff 001c 01d7 01bb
0000120 01bb 0000 0005 068b 055e 0431 0304 01d7
0000140 0000 0000 0000 0000 0000 0000 0000 0000
 
Traced the recovery process and received the following in the trace file:
RECOVERY OF THREAD 1 STUCK AT BLOCK 15902 OF FILE 6
REDO RECORD - Thread:1 RBA:0x000040:0x00000402:0x0076 LEN:0x0260 VLD:0x01
CONTINUE SCN scn: 1.40006607 02/24/97 14:19:12
CHANGE #3 CLASS:1 DBA:0x18003e1e INC:0x0000007b SEQ:0x00400007 OPCODE 11.2
buffer dba: 18003E1E inc: 7B seq: A ver: 1 type: 6=trans data
.... (rest of the trace file is included)
Also dumped the logfile ....
CHANGE #1 CLASS:1 DBA:0x18003e1e INC:0x0000007b SEQ:0x00000001 OPCODE 13.6
ktsnb redo: seg:0x509 typ:1 inx:1
We can see changes being made to this block all the way up to SEQ:0x00000009.
 
QUESTION:  why, then, is recovery stuck on
CHANGE #3 CLASS:1 DBA:0x18003e1e INC:0x0000007b SEQ:0x00400007???
Where is this SEQ value coming from?
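
One way to compare the SEQ the redo change expects with the SEQ currently
on disk is to dump the block from inside the instance (or compare against
the dd dump already taken). A minimal sketch, assuming the classic Oracle 7
blockdump event syntax, where the level is the decimal DBA (402669086)
reported in the ORA-600[3020] above:

SQL> ALTER SESSION SET EVENTS
  2  'immediate trace name blockdump level 402669086';

The resulting trace shows the block header (dba, inc, seq); if the on-disk
header does not match what the redo change vector expects, recovery cannot
apply the change and raises ORA-600[3020].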
=========================

=========================
REPRODUCIBLE?:
Yes, with 7.1.3 and 7.1.6.  I have ct's database
if needed.  Tried to recover in both 7.1.3 and 7.1.6 and got
exactly the same RECOVERY OF THREAD 1 STUCK AT BLOCK 15902 OF FILE 6
problem.

CUSTOMER IMPACT:
Ct needs to know why his recovery did NOT go through.
Although they did a shutdown abort, that should only have required
normal crash recovery; how could it have corrupted the database?
Ct needs to know what happened to cause the recovery
to get "stuck".

=========================
WORKAROUND:
Ct had to rebuild his database, but because of export problems,
the customer ended up using DUL.

=========================

For More Oracle DUL (data unloader) information :
Refer  http://parnassusdata.com/en/emergency-services  

If you cannot recover the data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638

E-mail: service@parnassusdata.com


Oracle ASM COMMUNICATION ERROR CAUSING THE INSTANCE TO CRASH

An ASM communication error has been reported by the RDBMS, leading to an
instance crash. This has happened a couple of times in the last few months.
 
Two kinds of ASM communication errors have occurred:
 
WARNING: ASM communication error: op 17 state 0x40 (21561)
WARNING: ASM communication error: op 0 state 0x0 (15055)
 


We are seeing this kind of crash frequently, causing disruption to the
service and availability of this critical production database.
 
DIAGNOSTIC ANALYSIS:
--------------------
This is a 4-node RAC database. The last time, the issue occurred on instance 1.
 
Diagnostics Time frame to focus on:
===========================================
Wed Feb 27 10:29:50 2013 <==== ariesprd1 communication failure with ASM
reported
WARNING: ASM communication error: op 17 state 0x40 (21561)
..
..
WARNING: ASM communication error: op 0 state 0x0 (15055)
..
Wed Feb 27 12:56:04 2013
Errors in file
D:\ORABASE\diag\rdbms\ariesprd\ariesprd1\trace\ariesprd1_dbw0_10068.trc:
ORA-21561: OID generation failed
..
..
Wed Feb 27 12:56:04 2013 <===== leading to instance crash
System state dump requested by (instance=1, osid=10068 (DBW0)), 
summary=[abnormal instance termination].
System State dumped to trace file
D:\ORABASE\diag\rdbms\ariesprd\ariesprd1\trace\ariesprd1_diag_6420.trc
DBW0 (ospid: 10068): terminating the instance due to error 63997
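
When the instances are up, one quick sanity check is to confirm, from the
ASM side, which database instances are currently registered as clients. A
minimal sketch, assuming you can connect to the local ASM instance:

SQL> select group_number, instance_name, db_name, status
  2  from v$asm_client;

A client that disappears from this view around the time of the WARNING
messages would corroborate a lost RDBMS-to-ASM connection.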
 
 
WORKAROUND:
-----------
Generally, restarting the crashed instance resolves the issue, but last
time it led to a block recovery problem (a kind of logical corruption)
causing all four nodes to hang indefinitely.
 
The error creates a kind of hang in the system until ultimately the
database instance crashes.
 
The last crash led to a block recovery issue, and in the end
we had to deploy DUL to retrieve the data.



Oracle ASM DISKGROUP WILL NOT MOUNT AFTER ADDING DISKS

This environment is using SecureFiles. There is no backup, because the
customer ordered the wrong hardware from Sun to perform the backup to. The
customer tried to add 8 disks to the diskgroup. One of the disks added was
slice2, which held the partition table for the disk. After the add failed
and they realized what had happened, they worked with the system
administrators and, according to the customer, successfully switched slice2
and slice6. After this they used the disks to successfully create the dummy
diskgroup DATA3. The diskgroup holds critical production data, and its
failure to mount prevents the production database from mounting, resulting
in significant revenue loss for the company. As there presently is no
backup of this data and they are using SecureFiles, DUL is not an option to
extract the data from the failed diskgroup. The diskgroup will not mount
because the disks that were just added cannot be discovered. The customer's
last attempt to use AMDU resulted in core dumps and no AMDU output. The
customer requests that the existing disk headers be repaired so that they
can get the diskgroup mounted and then add the correct disks to the
diskgroup.
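
To see why discovery fails for the newly added disks, one starting point is
the header and mount status ASM reports for every disk it can see. A
minimal sketch, assuming a connection to the ASM instance:

SQL> select path, header_status, mount_status, state
  2  from v$asm_disk
  3  order by path;

Disks whose HEADER_STATUS is something other than MEMBER (for example
CANDIDATE or FORMER) are typically the ones whose headers were damaged when
the partition-table slice was overwritten.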

 

Refer  http://parnassusdata.com/en/emergency-services  for more info.


Oracle ORA-600 [3020] , ORA-353 DURING DATABASE RECOVERY

This happened 3 times, on 3 different archived logs, in a recent recovery.
Finally the customer had to abort the recovery process and then use DUL to
rebuild the database.

About DUL & third party DUL-like tools Refer  http://parnassusdata.com/en/emergency-services  for more info.
Are you sure that they have write-back caching rather than write-through
caching?  If so, how did they enable this?  Is this a function of the
hard drive they are using, or some special software they are using?

The reason this is important is that write-back caching is known to corrupt
Oracle databases on all platforms, while write-through is safe. Oracle has
to be absolutely guaranteed that when NT claims a write is completed, the
data is really on disk. If data that Oracle thinks it has written is still
in system memory when NT crashes, then the database will be unrecoverable
at that point: the state of the database will be different from what the
undo/redo logs claim it should be.
 



Oracle CORRUPTING DATABASE OPEN FAILS WITH ORA-704, ORA-376 AND ORA-1092

Full database restore from the hot backup taken 03-SEP.
There has never been a backup of the undo tablespace, so
it was not restored. We updated the init.ora parameters
to offline and corrupt the rollback segments and to allow
corruption at resetlogs. After the restore, before recovery, the
three undo datafiles (files 2, 103, and 103) were offline dropped.
Recovery was started on the remaining datafiles, recovering
from log sequence# 2559 and cancelling after applying 2576.
This recovers the database to 09-SEP. Undo datafile
number 2 is still failing validation, even after the offline
drop and with _offline and _corrupt set for all undo segments listed
in the alert.log since before the database backup.
 
DIAGNOSTIC ANALYSIS:
--------------------
init.ora parameters changed or added:
 
 undo_management = manual
 _corrupted_rollback_segments = (_SYSSMU1$, thru _SYSSMU11$)
 _offline_rollback_segments = (_SYSSMU1$, thru _SYSSMU11$)
 _allow_resetlogs_corruption = true
 max_dump_file_size=unlimited
 event = "10013 trace name context forever, level 10"
 event = "376 trace name errorstack level 3"
 
 Create the controlfile and mount the database.
    
 SQL trace the recovery session:
    
 SQL> ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
 SQL> alter database datafile  
'/oracle/index03/oradata/medprod/undo_dat_01.dbf'
      offline drop;
 SQL> alter database datafile  
'/oracle/data04/oradata/medprod/undo_dat_02.dbf'
      offline drop;
 SQL> alter database datafile  
'/oracle/data01/oradata/medprod/undo_dat_02.dbf'
      offline drop;
 SQL> recover database until cancel using backup controlfile;
      cancel after 2576 is applied
 SQL> alter database open resetlogs;
 
Executed 10046 trace level 12 with event 376 set:
 
medprod_ora_52745.trc:

ksedmp: internal or fatal error
ORA-376: file 2 cannot be read at this time
ORA-1110: data file 2: '/oracle/index03/oradata/medprod/undo_dat_01.dbf'
Current SQL statement for this session:
select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$, spare1, 
spare2 from obj$ where owner#=:1 and name=:2 and namespace=:3 and remoteowner 
is null and linkname is null and subname is null
 

Oracle DUL (data unloader) may help this case :

Refer  http://parnassusdata.com/en/emergency-services  for more info.



ORA-600 [KTSSDRO_SEGMENT1] ON STARTUP – DB OPENS, BUT CRASHES IMMEDIATELY

CT was initially receiving an ORA-600[25012][1][3].
This occurred while doing an insert into a user table.  CT attempted
an index rebuild on an index on the table, and the
instance crashed.  Now, all attempts to open the DB
result in ORA-600[ktssdro_segment1][1][12621530][0].
/*
* Since we are committing after freeing every eight extents it is
* possible that the number of extents as indicated by the seg$ entry
* is different from the number of used extents in uet$. This will
* happen if the earlier instance had crashed in the midst of freeing
* extents. However since the segment header is itself freed only at
* the end the extent number should not be zero
*/
   ASSERTNM3(key.ktsukext != 0, OERINM("ktssdro_segment1"),
          segtid->tsn_ktid, segtid->dba_ktid, key.ktsukext);
   KSTEV4(key.ktsuktsn, key.ktsukfno, key.ktsukbno, key.ktsukext,
         KSTID(409));
    }
 
From reading this, it looks like possible corruption in UET$ or SEG$.
I have suggested that CT set event 10061 to stop SMON from freeing
extents.  This would mean no deletes from UET$, but I am not sure whether
it will solve the problem.
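
If the assertion really reflects a SEG$/UET$ extent-count mismatch, the
discrepancy can in principle be located from the dictionary base tables
once the instance stays up (for example with event 10061 set as above). A
hedged sketch for dictionary-managed tablespaces only, and purely
diagnostic, not a repair:

SQL> select s.ts#, s.file#, s.block#, s.extents as seg$_extents,
  2         count(u.ext#) as uet$_extents
  3  from   seg$ s, uet$ u
  4  where  u.ts# = s.ts#
  5  and    u.segfile# = s.file#
  6  and    u.segblock# = s.block#
  7  group  by s.ts#, s.file#, s.block#, s.extents
  8  having s.extents <> count(u.ext#);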
 
Unfortunately, CT does not have a good backup or backup strategy.
The data will have to be unloaded using the Oracle DUL Data Unloader.

Refer  http://parnassusdata.com/en/emergency-services  for more info.



ORA-00600 [KDDUMMY_BLKCHK] RECURRING

The exact error in that file is:
ORA-600: internal error [kddummy_blkchk], [255], [2392], [6401]
i.e. check code 6401 was detected for file 255, block 2392.
 
Block Checking: DBA = 1069549912, Block Type = KTB-managed data block
*** actual free space = 2612 < kdxcoavs = 2617
---- end index block validation
rechecking block failed with error code 6401
 
The current SQL is a parallel select; the trace file is from a PQ slave.
The stack trace suggests we are doing block cleanout.
 
The block dump shows an IOT with 6 key and 3 non-key columns.
 
These are indeed all the symptoms of bug 6646613, so this situation is
caused by that bug.
 
Checking the backport files:
43810, 0000,  "check-and-skip corrupt blocks in index scans"
 
The event is implemented in kdi.c and kdir.c and sets some flags
that should cause the block to be checked, and skipped if corrupt.
 
But that scenario does not apply here: in this case we are cleaning out
the block, and the corruption only becomes visible after the cleanout.




The problem is that the blocks are already corrupt, but our code
does not detect it until the blocks are cleaned out.
 
In general the problem is a small discrepancy in the available free
space. If the blocks are not updated any further, they can still
be queried and give correct results.
 
A further update can seriously corrupt a block, as we may then
try to insert a row for which in reality there is no space,
severely thrashing the block in the process by overwriting important
structures.
 
To salvage the data, either use DUL, or
clone the database and disable all events and block checking.
You may then introduce further corruption, but you can query the data,
so you can use any method to salvage it.
We advise doing this on a clone to protect yourself from
unexpected side effects.
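
On the throwaway clone, the second route could look roughly like the sketch
below. The parameter settings are standard; the owner and table names are
hypothetical placeholders, and DBMS_REPAIR.SKIP_CORRUPT_BLOCKS is a generic
way to let full scans step over corrupt blocks, not the specific fix for
this bug:

SQL> alter system set db_block_checking = false;
SQL> alter system set db_block_checksum = false;
SQL> begin
  2    dbms_repair.skip_corrupt_blocks(
  3      schema_name => 'APP',         -- hypothetical owner
  4      object_name => 'MY_IOT',      -- hypothetical IOT name
  5      object_type => dbms_repair.table_object,
  6      flags       => dbms_repair.skip_flag);
  7  end;
  8  /
SQL> create table my_iot_salvage as select * from app.my_iot;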
Refer  http://parnassusdata.com/en/emergency-services  for more info.


ORA-600 [KTSPSCANINIT-D]

PROBLEM:
--------
Ran the script and it came back with TABLE_ACT_ENTRY.
Can select from the TABLE_ACT_ENTRY table with rownum<200, but other rownum
predicates give an error. Also attempted to analyze the
"SA"."TABLE_ACT_ENTRY" table, but received the same ORA-600 error, as can
be seen from the pasted output below. Can't analyze TABLE_ACT_ENTRY?
 
SQL> select * from TABLE_ACT_ENTRY where rownum>100 and rownum<150;
select * from TABLE_ACT_ENTRY  where rownum>100 and rownum<150
              *
ERROR at line 1:
ORA-600: internal error code, arguments: [ktspScanInit-d], [35758097], 
[],[], [], [], [], []
 
SQL> select * from TABLE_ACT_ENTRY where rownum>18315000;
select * from TABLE_ACT_ENTRY where rownum>18315000
              *
ERROR at line 1:
ORA-600: internal error code, arguments: [ktspScanInit-d], [35758097], 
[],[], [], [], [], []
 
SQL>  analyze table "SA"."TABLE_ACT_ENTRY" validate structure cascade;
 analyze table "SA"."TABLE_ACT_ENTRY" validate structure cascade
*
ERROR at line 1:
ORA-600: internal error code, arguments: [ktspScanInit-d], [35758097], 
[],[], [], [], [], []
 
 
DIAGNOSTIC ANALYSIS:
--------------------
ERROR:              
   ORA-600 [ktspscaninit-d] [a]
 
 VERSIONS:
   versions 9.2
 
 DESCRIPTION:
 
   Oracle has encountered an inconsistency in the metadata for an ASSM
   (Automatic Segment Space Management) segment. 
 
   An ASSM segment has two Highwater marks, a Low Highwater mark (LHWM) and 
   a High Highwater mark (HHWM - this is the same as a traditional HWM).
 
   This error is raised when we fail to locate the Low Highwater mark block.
 
   Stored in the segment header is information which identifies the Level 1
   Bitmap Block (L1BMB) for the LHWM block; this BMB manages the range 
   of datablocks that holds the LHWM block.
 
   If, during a scan of the ranges in this L1BMB, we fail to locate the LHWM
   block, then this error is raised.
  
 ARGUMENTS:
   Arg [a] Block address of Low HWM block  
  
 FUNCTIONALITY:      
   TRANSACTION SEGMENT PAGETABLE
 
---------------
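
Argument [a] (35758097 here) is a data block address, so it can be decoded
into a relative file number and block number with DBMS_UTILITY. A minimal
sketch:

SQL> select dbms_utility.data_block_address_file(35758097)  as file#,
  2         dbms_utility.data_block_address_block(35758097) as block#
  3  from   dual;

The resulting file# and block# identify the missing Low HWM block, which
can then be dumped for inspection.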
 
Tried marking the bitmap corrupt and rebuilding it; this did not work.
 
WORKAROUND:
-----------
Drop the table and import from a backup. This is not an option:
the table is critical to the complete operation of the database.



The 'DUL' tool was used to extract the table data and write it to a flat file, and they are now trying to use SQL*Loader to load it back into a table.
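
For the reload step, a SQL*Loader control file along the following lines is
typical. A hedged sketch only: the data file name, delimiter, and column
list are hypothetical placeholders, since the actual DUL output format is
not captured here.

LOAD DATA
INFILE 'table_act_entry.dat'
INTO TABLE sa.table_act_entry_new
FIELDS TERMINATED BY '|'
(entry_id, act_id, entry_text)

It would then be invoked with something like:
sqlldr userid=sa control=table_act_entry.ctl log=table_act_entry.log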

Refer  http://parnassusdata.com/en/emergency-services  for more info.


Oracle system01.dbf corruption

My Oracle database died. The datafile system01.dbf was corrupt after a power outage.

The server disks have several corrupt files, including archivelogs.

The backup was lost.

Does anyone know a strategy to recover this database?

If not, I need to extract the data from the Oracle datafiles (.dbf) to CSV. Does anyone know a tool to do this?

 

Depending on the 'corruption' and the condition of the other files, Oracle Support may be able to help you force the database open in an inconsistent state just to export your data.  You will need to log an SR for them to work directly with you.
As for 'extracting data', there is a service which field support performs.

 

There may be a possibility of salvaging the data by using DUL (Data Unloader). If you have lost everything, Oracle Consulting may be able to assist using the DUL tool, but note that it comes at a cost above your normal support, so it should be a last resort.

There is also a third-party tool available, Oracle PRM-DUL
(data unloading by data extraction). Refer to http://parnassusdata.com/en/emergency-services for more info.

 

Oracle How to recover and open the database if the archivelog required for recovery is either missing, lost or corrupted?

Oracle Database – Enterprise Edition – Version 8.1.7.4 to 12.1.0.2 [Release 8.1.7 to 12.1]
Information in this document applies to any platform.
***Checked for relevance on 16-July-2015***

GOAL

How to recover and open the database if the archivelog required for recovery is either missing, lost or corrupted?

SOLUTION

The assumption here is that we have exhausted all possible locations to find another good and valid copy or backup of the archivelog that we are looking for, which could be in one of the following:

  • directories defined in the LOG_ARCHIVE_DEST_n
  • another directory in the same server or another server
  • standby database
  • RMAN backup
  • OS backup

If the archivelog cannot be found in any of the above-mentioned locations, then the approach and strategy for recovering and opening the database depend on the SCN (System Change Number) of the datafiles, as well as on whether the log sequence# required for the recovery is still available in the online redologs.

For the SCN of the datafiles, it is important to know the mode of the database when the datafiles were backed up; that is, whether the database was open, mounted, or shut down (cleanly) when the backup was taken.

If the datafiles are restored from an online or hot backup, which means that the database was open when the backup was taken, then we must apply at least the archivelog(s) or redolog(s) whose log sequence#s were generated from the beginning until the completion of the backup that was used to restore the datafiles.

However, if the datafiles are restored from an offline or cold backup, and the database was cleanly shut down before the backup was taken (that is, the database was not open, but in nomount mode or mounted, when the backup was taken), then the datafiles are already synchronized in terms of their SCN. In this situation, we can immediately open the database without applying any archivelogs, because the datafiles are already in a consistent state, unless there is a requirement to roll the database forward to a point in time after the said backup was taken.

The critical point here is to ensure that all of the online datafiles are synchronized in terms of their SCN before we can open the database normally. So, run the following SQL statement, as shown below, to determine whether the datafiles are synchronized. Note that we query V$DATAFILE_HEADER, because we want to know the SCN recorded in the header of the physical datafile, and not V$DATAFILE, which derives its information from the controlfile.

select status, checkpoint_change#,
to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
count(*)
from v$datafile_header
group by status, checkpoint_change#, checkpoint_time
order by status, checkpoint_change#, checkpoint_time;

The results of the above query must return one and only one row for the online datafiles, which means that they are already synchronized in terms of their SCN. Otherwise, if the query returns more than one row for the online datafiles, then the datafiles are not synchronized yet. In this case, we need to apply archivelog(s) or redolog(s) to synchronize all of the online datafiles. By the way, take note of the CHECKPOINT_TIME in V$DATAFILE_HEADER, which indicates the date and time up to which the datafiles have been recovered.

 

It is also important to check the status of the datafiles. Sometimes, even though the SCN is the same for all files, you still cannot open the database. The status can be checked via:

select fhsta, count(*) from X$KCVFH group by fhsta;

You should expect to find status 0, and 8192 for the SYSTEM datafile. If the status is 1 or 64, the file is in backup mode and requires more recovery; other statuses should be referred to Oracle Support.
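
Files still in hot backup mode can also be listed through the V$BACKUP view, and taken out of backup mode explicitly. A minimal sketch (the file number in the END BACKUP example is a hypothetical placeholder):

select file#, status, change#, time
from v$backup
where status = 'ACTIVE';

alter database datafile 5 end backup;

Note that ending backup mode is only bookkeeping; the redo generated while the file was in backup mode must still be applied before the file is consistent.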

 

The results of the query above may also show some offline datafiles. Ensure that all of the required datafiles are online, because we may not be able to recover an offline datafile after the database is opened with resetlogs. Even though, starting with 10g, we can recover a database through resetlogs thanks to the introduction of the "%R" element in the LOG_ARCHIVE_FORMAT, it is recommended to online the required datafiles now rather than after the database is open with resetlogs, to avoid possible problems. However, in some cases we intentionally leave datafile(s) offline, because we are doing a partial database restore, or perhaps we do not need their contents.

You may run the following query to determine the offline datafiles:

select file#, name from v$datafile
where file# in (select file# from v$datafile_header
where status='OFFLINE');

You may issue the following SQL statement to change the status of the required datafile(s) from “OFFLINE” to “ONLINE”:

alter database datafile <file#> online;

If we are lucky, the required log sequence# may still be available in the online redologs, with the corresponding redolog member still physically existing on disk; in that case we may apply it instead of the archivelog. To confirm, issue the following query, which determines the redolog member(s) that can be applied to recover the database:

set echo on feedback on pagesize 100 numwidth 16
alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
select LF.member, L.group#, L.thread#, L.sequence#, L.status,
L.first_change#, L.first_time, DF.min_checkpoint_change#
from v$log L, v$logfile LF,
(select min(checkpoint_change#) min_checkpoint_change#
from v$datafile_header
where status='ONLINE') DF
where LF.group# = L.group#
and L.first_change# >= DF.min_checkpoint_change#;

If the above query returns no rows, because the V$DATABASE.CONTROLFILE_TYPE has a value of "BACKUP", then try to apply each of the redolog members one at a time during the recovery. You may run the following query to determine the redolog members:

select * from v$logfile;

If you have tried to apply all of the online redolog members instead of an archivelog during the recovery, but you always received the ORA-00310 error, as shown in the example below, then the log sequence# required for recovery is no longer available in the online redolog.

ORA-00279: change 189189555 generated at 11/03/2007 09:27:46 needed for thread 1
ORA-00289: suggestion : +BACKUP
ORA-00280: change 189189555 for thread 1 is in sequence #428
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
+BACKUP/prmy/onlinelog/group_2.258.603422107
ORA-00310: archived log contains sequence 503; sequence 428 required
ORA-00334: archived log: '+BACKUP/prmy/onlinelog/group_2.258.603422107'

After trying all of the possible solutions mentioned above, if you still cannot open the database because the archivelog required for recovery is missing, lost, or corrupted, or the corresponding log sequence# is no longer available in the online redologs (having already been overwritten during log switches), then the database cannot be opened normally, since the datafiles are in an inconsistent state. The following are the 3 options available to open the database:

Option#1: Force open the database by setting some hidden parameters in the init.ora. Note that you can only do this under the guidance of Oracle Support with a service request, and there is no 100% guarantee that it will open the database. However, once the database is opened, we must immediately rebuild it. A database rebuild means doing the following: (1) perform a full-database export, (2) create a brand new and separate database, and finally (3) import the recent export dump. When the database is opened, the data will be at the same point in time as the datafiles used. Before you try this option, ensure that you have a good and valid backup of the current database.
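
For illustration only, the hidden-parameter route typically looks like the sketch below. These underscore parameters are unsupported and must be set only under Oracle Support guidance (the same parameters appear in the ORA-704 case earlier in this document):

# init.ora additions (illustrative)
_allow_resetlogs_corruption = true
undo_management = manual

recover database until cancel using backup controlfile;
alter database open resetlogs;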

Option#2: If you have a good and valid backup of the database, then restore the database from that backup and recover it by applying up to the last available archivelog. In this option, we recover the database only up to the last archivelog applied, and any data after that point is lost. If no archivelogs are applied at all, then we can only recover the database to the backup that was restored. However, if we restored from an online or hot backup, we may not be able to open the database, because we still need to apply the archivelogs generated during that backup in order to synchronize the SCN of the datafiles before the database can be opened normally.
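
Sketched in RMAN terms, and assuming (hypothetically) that sequence 428 from the ORA-00310 example above were the first missing log, option #2 amounts to an incomplete recovery stopping just before the gap:

RMAN> run {
  set until sequence 428 thread 1;  # recovery stops before the missing log
  restore database;
  recover database;
}
RMAN> alter database open resetlogs;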

Option#3: Manually extract the data using Oracle's Data Unloader (DUL).    http://parnassusdata.com/en/emergency-services

 


If the customer wants to pursue this approach, we need the complete name, phone#, and email address of the person who has the authority to sign the work order on behalf of the customer.
