Feed aggregator

Clob vs Binary XML storage

Tom Kyte - 3 hours 35 min ago
Hello Team, while doing a POC for storing XML in CLOB storage and Binary XML storage, I could see that storing XML as Binary XML takes less tablespace compared to CLOB. As far as I know, both store XML in LOB storage, so why is there a difference betwee...
Categories: DBA Blogs

ora-24247 when making an https call

Tom Kyte - 3 hours 35 min ago
Hi, I have a problem when making an https call inside a package. It doesn't appear to recognise the privileges granted to access the acl. When I call utl_http.begin_request in an anonymous plsql block or in a procedure with authid defined as cu...
Categories: DBA Blogs

HA and Failover in Oracle RAC

Tom Kyte - 3 hours 35 min ago
Hello, Ask Tom Team. I have many questions about Oracle RAC HA and failover. I was reading the info in the link below and it helped me a lot, but I still have some questions. https://asktom.oracle.com/pls/apex/asktom.search?tag=failover-in-rac...
Categories: DBA Blogs

Complex Query

Tom Kyte - 3 hours 35 min ago
I have a large number of orders (200) involving around 2000 different products and need to group them in batches of 6 orders. The task is to identify the best possible groups of orders so performance (human performance) can be maximized. As a start...
Categories: DBA Blogs

Can we call a procedure in select statement with any restriction?

Tom Kyte - 3 hours 35 min ago
Hi Tom, please explain with a simple example: can we restrict function invocation in a SELECT statement? Can we call a procedure in a SELECT statement with any restriction?
Categories: DBA Blogs

Database Link to 9.2 Database from 19c

Bobby Durrett's DBA Blog - Fri, 2019-12-13 15:12

I have mentioned in previous posts that I am working on migrating a large 11.2 database on HP Unix to 19c on Linux. I ran across a database link to an older 9.2 database in the current 11.2 database. That link does not work in 19c so I thought I would blog about my attempts to get it to run in 19c. It may not be that useful to other people because it is a special case, but I want to remember it for myself if nothing else.

First, I’ll just create a test table in my own schema on a 9.2 development database:

SQL> create table test as select * from v$version;

Table created.

SQL> 
SQL> select * from test;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
PL/SQL Release 9.2.0.5.0 - Production
CORE	9.2.0.6.0	Production
TNS for HPUX: Version 9.2.0.5.0 - Production
NLSRTL Version 9.2.0.5.0 - Production

Next, I will create a link to this 9.2 database from a 19c database. I will hide the part of the link creation that has my password and the database details, but they are not needed.

SQL> create database link link_to_92
... removed for security reasons ...

Database link created.

SQL> 
SQL> select * from test@link_to_92;
select * from test@link_to_92
                   *
ERROR at line 1:
ORA-03134: Connections to this server version are no longer supported.

So I looked up ways to get around the ORA-03134 error. I can’t remember all the things I checked but I have a note that I looked at this one link: Resolving 3134 errors. The idea was to create a new database link from an 11.2 database to a 9.2 database. Then create a synonym on the 11.2 database for the table I want on the 9.2 system. Here is what that looks like on my test databases:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
... removed for brevity ...

SQL> create database link link_from_112
... removed for security ...

Database link created.

SQL> create synonym test for test@link_from_112;

Synonym created.

SQL> 
SQL> select * from test;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production

Now that I have the link and synonym on the 11.2 middleman database, I go back to the 19c database and create a link to the 11.2 database and query the synonym to see the original table:

SQL> select * from v$version;

BANNER                                                                           ...
-------------------------------------------------------------------------------- ...
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production           ...
...

SQL> create database link link_to_112
...

Database link created.
...
SQL> select * from v$version@link_to_112;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
...

SQL> select * from test@link_to_112;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production

So far so good. I am not sure how clear I have been, but the point is that I could not query the table test on the 9.2 database from a 19c database without getting an error. By jumping through an 11.2 database I can now query from it. But, alas, that is not all my problems with this remote 9.2 database table.

When I first started looking at these remote 9.2 tables in my real system, I wanted to get an execution plan of a query that used them. The link through an 11.2 database trick let me query the tables but not get a plan of the query.

SQL> truncate table plan_table;

Table truncated.

SQL> 
SQL> explain plan into plan_table for
  2  select * from test@link_to_112
  3  /

Explained.

SQL> 
SQL> set markup html preformat on
SQL> 
SQL> select * from table(dbms_xplan.display('PLAN_TABLE',NULL,'ADVANCED'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------
Error: cannot fetch last explain plan from PLAN_TABLE

SQL> 
SQL> select object_name from plan_table;

OBJECT_NAME
------------------------------------------------------------------------------

TEST

Kind of funky but not the end of the world. Only a small number of queries use these remote 9.2 tables so I should be able to live without explain plan. Next, I needed to use the remote table in a PL/SQL package. For simplicity I will show using it in a proc:

SQL> CREATE OR REPLACE PROCEDURE BOBBYTEST
  2  AS
  3  ver_count number;
  4  
  5  BEGIN
  6    SELECT
  7    count(*) into ver_count
  8    FROM test@link_to_112;
  9  
 10  END BOBBYTEST ;
 11  /

Warning: Procedure created with compilation errors.

SQL> SHOW ERRORS;
Errors for PROCEDURE BOBBYTEST:

LINE/COL ERROR
-------- -----------------------------------------------------------------
6/3      PL/SQL: SQL Statement ignored
6/3      PL/SQL: ORA-00980: synonym translation is no longer valid

I tried creating a synonym for the remote table but got the same error:

SQL> create synonym test92 for test@link_to_112;

...

SQL> CREATE OR REPLACE PROCEDURE BOBBYTEST
  2  AS
  3  ver_count number;
  4  
  5  BEGIN
  6    SELECT
  7    count(*) into ver_count
  8    FROM test92;
  9  
 10  END BOBBYTEST ;
 11  /

Warning: Procedure created with compilation errors.

SQL> SHOW ERRORS;
Errors for PROCEDURE BOBBYTEST:

LINE/COL ERROR
-------- -----------------------------------------------------------------
6/3      PL/SQL: SQL Statement ignored
6/3      PL/SQL: ORA-00980: synonym translation is no longer valid

Finally, by chance I found that I could use a view for the remote synonym and the proc would compile:

SQL> create view test92 as select * from test@link_to_112;

View created.

...

SQL> CREATE OR REPLACE PROCEDURE BOBBYTEST
  2  AS
  3  ver_count number;
  4  
  5  BEGIN
  6    SELECT
  7    count(*) into ver_count
  8    FROM test92;
  9  
 10  END BOBBYTEST ;
 11  /

Procedure created.

SQL> SHOW ERRORS;
No errors.
SQL> 
SQL> execute bobbytest;

PL/SQL procedure successfully completed.

SQL> show errors
No errors.

Now one last thing to check. Will the plan work with the view?

SQL> explain plan into plan_table for
  2  select * from test92
  3  /

Explained.

SQL> select * from table(dbms_xplan.display('PLAN_TABLE',NULL,'ADVANCED'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------
Error: cannot fetch last explain plan from PLAN_TABLE

Sadly, the view was not a cure-all. So, here is a summary of what to do if you have a procedure on a 19c database that needs to access a table on a 9.2 database:

  • Create a link on an 11.2 database to the 9.2 database
  • Create a synonym on the 11.2 database pointing to the table on the 9.2 database
  • Create a link on the 19c database to the 11.2 database
  • Create a view on the 19c database that queries the synonym on the 11.2 database
  • Use the view in your procedure on your 19c database
  • Note that explain plans may not work with SQL that uses the view
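The steps above can be condensed into a single SQL sketch. The user names, passwords, and TNS aliases below are placeholders (the post redacts the real ones), and the object names just mirror the test objects created earlier; treat this as an outline of the daisy-chain, not the exact production commands.

```sql
-- On the 11.2 "middleman" database (connect details are placeholders):
CREATE DATABASE LINK link_to_92
  CONNECT TO some_user IDENTIFIED BY some_password
  USING 'tns_alias_for_92';

CREATE SYNONYM test FOR test@link_to_92;

-- On the 19c database (connect details are placeholders):
CREATE DATABASE LINK link_to_112
  CONNECT TO some_user IDENTIFIED BY some_password
  USING 'tns_alias_for_112';

-- A view over the remote synonym is what lets PL/SQL compile;
-- a local synonym pointing straight at test@link_to_112 raises ORA-00980.
CREATE VIEW test92 AS SELECT * FROM test@link_to_112;

-- The procedure then references only the local view:
CREATE OR REPLACE PROCEDURE bobbytest AS
  ver_count NUMBER;
BEGIN
  SELECT COUNT(*) INTO ver_count FROM test92;
END bobbytest;
/
```

Ad hoc queries against `test@link_to_112` work directly from SQL*Plus; the view is only needed so that stored PL/SQL compiles.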

Bobby

Categories: DBA Blogs

Updating the trail file location for Oracle GoldenGate Microservices

DBASolved - Fri, 2019-12-13 09:21

When you first install Oracle GoldenGate Microservices, you may have taken the standard installation approach and all the configuration, logging and trail file information will reside in a standard directory structure.  This makes the architecture of your environment really easy.   Let’s say you want to identify what trail files are being used by the […]

The post Updating the trail file location for Oracle GoldenGate Microservices appeared first on DBASolved.

Categories: DBA Blogs

Q2 FY20 GAAP EPS UP 14% TO $0.69 and NON-GAAP EPS UP 12% TO $0.90

Oracle Press Releases - Thu, 2019-12-12 15:00
Press Release
Q2 FY20 GAAP EPS UP 14% TO $0.69 and NON-GAAP EPS UP 12% TO $0.90
Fusion ERP Cloud Revenue Up 37%; Autonomous Database Cloud Revenue Up >100%

Redwood Shores, Calif.—Dec 12, 2019

Oracle Corporation (NYSE: ORCL) today announced fiscal 2020 Q2 results. Total Revenues were $9.6 billion, up 1% in USD and in constant currency compared to Q2 last year. Cloud Services and License Support revenues were $6.8 billion, while Cloud License and On-Premise License revenues were $1.1 billion.

GAAP Operating Income was up 3% to $3.2 billion, and GAAP Operating Margin was 33%. Non-GAAP Operating Income was $4.0 billion, and non-GAAP Operating Margin was 42%. GAAP Net Income was $2.3 billion, and non-GAAP Net Income was $3.0 billion. GAAP Earnings Per Share was up 14% to $0.69, while non-GAAP Earnings Per Share was up 12% to $0.90.

Short-term deferred revenues were $8.1 billion. Operating Cash Flow was $13.8 billion during the trailing twelve months.

“We had another strong quarter in our Fusion and NetSuite cloud applications businesses with Fusion ERP revenues growing 37% and NetSuite ERP revenues growing 29%,” said Oracle CEO, Safra Catz. “This consistent rapid growth in the now multibillion dollar ERP segment of our cloud applications business has enabled Oracle to deliver a double-digit EPS growth rate year-after-year. I fully expect we will do that again this year.”

“It’s still early days, but the Oracle Autonomous Database already has thousands of customers running in our Gen2 Public Cloud,” said Oracle CTO, Larry Ellison. “Currently, our Autonomous Database running in our Public Cloud business is growing at a rate of over 100%. We expect that growth rate to increase dramatically as we release our Autonomous Database running on our Gen2 Cloud@Customer into our huge on-premise installed base over the next several months.”

The Board of Directors also declared a quarterly cash dividend of $0.24 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on January 9, 2020, with a payment date of January 23, 2020.

Q2 Fiscal 2020 Earnings Conference Call and Webcast

Oracle will hold a conference call and webcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (816) 287-5563, Passcode: 425392. To access the live webcast, please visit the Oracle Investor Relations website at http://www.oracle.com/investor. In addition, Oracle’s Q2 results and fiscal 2020 financial tables are available on the Oracle Investor Relations website.

A replay of the conference call will also be available by dialing (855) 859-2056 or (404) 537-3406, Passcode: 4597628.

Contact Info
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE:ORCL), visit us at www.oracle.com or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

“Safe Harbor” Statement

Statements in this press release relating to Oracle's future plans, expectations, beliefs, intentions and prospects, including statements regarding the growth of our earnings per share and our Autonomous Database business, are "forward-looking statements" and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. We presently consider the following to be among the important factors that could cause actual results to differ materially from expectations: (1) Our success depends upon our ability to develop new products and services, integrate acquired products and services and enhance our existing products and services. (2) Our cloud strategy, including our Oracle Software-as-a-Service and Infrastructure-as-a-Service offerings, may adversely affect our revenues and profitability. (3) We might experience significant coding, manufacturing or configuration errors in our cloud, license and hardware offerings. (4) If the security measures for our products and services are compromised and as a result, our customers' data or our IT systems are accessed improperly, made unavailable, or improperly modified, our products and services may be perceived as vulnerable, our brand and reputation could be damaged, the IT services we provide to our customers could be disrupted, and customers may stop using our products and services, all of which could reduce our revenue and earnings, increase our expenses and expose us to legal claims and regulatory actions. (5) Our business practices with respect to data could give rise to operational interruption, liabilities or reputational harm as a result of governmental regulation, legal requirements or industry standards relating to consumer privacy and data protection. 
(6) Economic, political and market conditions can adversely affect our business, results of operations and financial condition, including our revenue growth and profitability, which in turn could adversely affect our stock price. (7) Our international sales and operations subject us to additional risks that can adversely affect our operating results. (8) Acquisitions present many risks and we may not achieve the financial and strategic goals that were contemplated at the time of a transaction. A detailed discussion of these factors and other risks that affect our business is contained in our SEC filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading "Risk Factors." Copies of these filings are available online from the SEC or by contacting Oracle Corporation's Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of December 12, 2019. Oracle undertakes no duty to update any statement in light of new information or future events. 

Talk to a Press Contact

Ken Bond

  • +1.650.607.0349

Deborah Hellinger

  • +1.212.508.7935

Baltimore Gas & Electric and Oracle Reshape Peak Pricing Programs

Oracle Press Releases - Wed, 2019-12-11 07:00
Press Release
Baltimore Gas & Electric and Oracle Reshape Peak Pricing Programs

Redwood Shores, Calif.—Dec 11, 2019

Baltimore Gas & Electric (BGE) has launched a digital experience pilot for thousands of Baltimore residents who pay on and off peak rates for electricity. BGE is using Oracle Utilities Opower Behavioral Load Shaping Cloud Service to engage customers with a proactive, personalized experience designed to help them save on their utility bills. The new service encourages customers to shift their biggest everyday energy loads, such as running energy-intensive appliances and electric vehicle charging, to off peak times. With these tips, BGE customers can save money while helping reduce daily peak energy demand and supporting a cleaner, healthier grid.

“We know on peak and off peak rates can seem complex, and we have a responsibility to offer excellent service to customers who choose them,” commented Mark Case, VP of regulatory policy and strategy at BGE. “With this new service from Opower, we can deliver a better experience for these customers by helping them shift their energy load for improved power affordability and reliability, all while reducing emissions.”

Learn more about the new Opower Behavioral Load Shaping Service here.

Peak pricing programs have not traditionally provided the ongoing, personalized outreach customers need to help them shift their energy use and benefit from lower off-peak rates. Years of public evaluation data show programs that offered some outreach only left customers wanting more. With machine learning, user experience design, and customer engagement automation, Opower is reshaping this equation.

With Opower, BGE is providing residents new insight into how small behavior changes can create significant bill savings. Enrolled customers began receiving weekly digital communications that help them understand how their on and off peak rates work. Each customer receives continually evolving content like week-over-week spending comparisons, personalized information about their on and off peak spending, and adaptive, intelligent recommendations for shifting their largest energy loads in order to save money.

“On and off peak rates are nothing new—our industry has been implementing them for decades. Program evaluators have found again and again that customers with peak pricing are eager for better insights into their energy usage and their bills,” noted Dr. Ahmad Faruqui, principal and energy economist with The Brattle Group. “What’s new and different is the way in which enabling technologies boost customer awareness and price responsiveness. BGE and Opower are putting those learnings into practice and employing a smart experimental design that will expand our industry’s body of knowledge.”

BGE and Opower are running the program as a randomized control trial in order to yield novel, statistically significant peak pricing pilot results. Throughout the trial, BGE and Opower will be isolating and measuring the impact of the customer experience itself—discretely from the peak price signal—on bill savings, customer satisfaction, peak demand, and adoption of BGE programs and products that can help customers save even more. The trial started in Summer 2019. 

Several additional utilities in the U.S. are running the Opower Behavioral Load Shaping service this year. This is the fourth new product released by Opower recently, in addition to hundreds of new customer engagement features for utilities and their customers. Opower is the world’s most widely deployed utility customer engagement platform, providing energy data analytics on over two trillion meter reads and powering the utility customer experience for more than 60 million households.

Contact Info
Kristin Reeves
Oracle Corporation
+1 925 787 6744
kris.reeves@oracle.com
Wendy Wang
H&K Strategies
+1 979 216 8157
wendy.wang@hkstrategies.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kristin Reeves

  • +1 925 787 6744

Wendy Wang

  • +1 979 216 8157

Updating parameter files from REST

DBASolved - Tue, 2019-12-10 10:12

One of the most important and time consuming things to do with Oracle GoldenGate is to build parameter files for the GoldenGate processes.  In the past, this required you to access GGSCI and run commands like: GGSCI> edit params <process group> After which, you then had to bounce the process group for the changes to […]

The post Updating parameter files from REST appeared first on DBASolved.

Categories: DBA Blogs

How to optimize a campaign to get the most out of mobile advertising

VitalSoftTech - Tue, 2019-12-10 09:54

  When marketing for a campaign, we must optimize it in the best way possible to get the most out of it. Otherwise, it is just advertising revenue going to waste. Same goes for mobile advertising. We are here to discuss the best mobile ad strategies. However, before we start, here is a question for […]

The post How to optimize a campaign to get the most out of mobile advertising appeared first on VitalSoftTech.

Categories: DBA Blogs

Oracle Challenger Series Returns to Southern California in 2020 with Newport Beach, Indian Wells Events

Oracle Press Releases - Tue, 2019-12-10 09:00
Press Release
Oracle Challenger Series Returns to Southern California in 2020 with Newport Beach, Indian Wells Events

Indian Wells, Calif.—Dec 10, 2019

The Oracle Challenger Series today announced its return to Southern California for two events in early 2020. The third stop of the 2019-2020 series takes place at the Newport Beach Tennis Club on January 27 – February 2. The Indian Wells Tennis Garden hosts the final tournament on March 2-8.

Now in its third year, the Oracle Challenger Series helps up-and-coming American players secure both ranking points and prize money in the United States. The two American men and two American women who accumulate the most points over the course of the Challenger Series receive wild cards into the singles main draws at the BNP Paribas Open in Indian Wells. As part of the Oracle Challenger Series’ mission to grow the sport and make professional tennis events more accessible, each tournament is free and open to the public.

The Newport Beach and Indian Wells events will conclude the 2019-2020 Road to Indian Wells and are instrumental in determining which American players receive wild card berths at the 2020 BNP Paribas Open. At the halfway point of the Challenger Series, Houston champion Marcos Giron holds the top spot for the men. Usue Arconada is in first place for the women following an impressive showing in New Haven with finals appearances in both singles and doubles. Trailing just behind them are Tommy Paul, the men’s champion in New Haven, and CoCo Vandeweghe, the women’s runner-up in Houston.

The Newport Beach event has propelled its champions to career-defining seasons over the previous two years. Americans Taylor Fritz and Danielle Collins began their steady climb up the world rankings by capturing the titles at the 2018 inaugural event. Bianca Andreescu’s 2019 title marked the beginning of her meteoric rise to WTA stardom. Likewise, the Indian Wells event has featured some of the Challenger Series’ strongest player fields and produced champions Martin Klizan, Sara Errani, Kyle Edmund and Viktorija Golubic.

The Newport Beach tournament will also feature the Oracle Champions Cup which takes place on Saturday, February 1. Former World No. 1 and 2003 US Open Champion Andy Roddick; 10-time ATP Tour titlist and former World No. 4 James Blake; 2004 Olympic silver medalist and 6-time ATP Tour singles winner Mardy Fish; and 2005 US Open semifinalist Robby Ginepri headline the one-night tournament. The event consists of two one-set semifinals with the winners meeting in a one-set championship match.

Tickets to the Oracle Champions Cup go on-sale to the general public on Tuesday, December 17. Special VIP packages including play with the pros, special back-stage access and an exclusive player party are also available.

For more information about the Oracle Challenger Series visit oraclechallengerseries.com, and be sure to follow @OracleChallngrs on Twitter and @OracleChallengers on Instagram. To inquire about volunteer opportunities, including becoming a ball kid, please email oraclechallengerseries@desertchampions.com.

Contact Info
Mindi Bach
Oracle
mindi.bach@oracle.com
About the Oracle Challenger Series

The Oracle Challenger Series was established to help up-and-coming American tennis players secure both ranking points and prize money. The Oracle Challenger Series is the next chapter in Oracle’s ongoing commitment to support U.S. tennis for men and women at both the collegiate and professional level. The Challenger Series features equal prize money in a groundbreaking tournament format that combines the ATP Challenger Tour and WTA 125K Series.

The Oracle Challenger Series offers an unmatched potential prize of wild cards into the main draw of the BNP Paribas Open, widely considered the top combined ATP Tour and WTA professional tennis tournament in the world, for the top two American male and female finishers.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

The Global Oracle APEX Community Delivers. Again.

Joel Kallman - Mon, 2019-12-09 16:59

Oracle was recently recognized as a November 2019 Gartner Peer Insights Customers’ Choice for Enterprise Low-Code Application Platform Market for Oracle APEX.  You can read more about that here.

I personally regard this a distinction for the global Oracle APEX community.  We asked for your assistance by participating in these reviews, and you delivered.  Any time we've asked for help or feedback, the Oracle APEX community has selflessly and promptly responded.  You have always been very gracious with your time and energy.

I was telling someone recently how I feel the Oracle APEX community is unique within all of Oracle, but I also find it to be unique within the industry.  It is the proverbial two-way partnership that many talk about but rarely live through their actions.  We remain deeply committed to our customers' personal and professional success - it is a mindset which permeates our team.  We are successful only when our customers and partners are successful.

Thank you to all who participated in the Gartner Peer Insights reviews - customers, partners who nudged their customers, and enthusiasts.  You, as a community, stand out amongst all others.  We are grateful for you.

Oracle Names Vishal Sikka to the Board of Directors

Oracle Press Releases - Mon, 2019-12-09 15:15
Press Release
Oracle Names Vishal Sikka to the Board of Directors

Redwood Shores, Calif.—Dec 9, 2019

Oracle (NYSE: ORCL) today announced that Dr. Vishal Sikka, founder and CEO of the AI company Vianai Systems, has been named to Oracle’s Board of Directors.  Before starting Vianai, Vishal was a top executive at SAP and the CEO of Infosys.

“The digital transformation of an enterprise is enabled by the rapid adoption of modern cloud applications and technologies,” said Oracle CEO Safra Catz. “Vishal clearly understands how Oracle’s Gen2 Cloud Infrastructure, Autonomous Database and Applications come together in the Oracle Cloud to help our customers drive business value and adapt to change. I am very happy that he will be joining the Oracle Board.”

“For years, the Oracle Database has been the heartbeat and life-blood of every large and significant organization in the world,” said Dr. Vishal Sikka. “Today, Oracle is the only one of the big four cloud companies that offers both Enterprise Application Suites and Secure Infrastructure technologies in a single unified cloud. Oracle’s unique position in both applications and infrastructure paves the way for enormous innovation and growth in the times ahead. I am excited to have the opportunity to join the Oracle Board, and be part of this journey.”

“Vishal is one of the world’s leading experts in Artificial Intelligence and Machine Learning,” said Oracle Chairman and CTO Larry Ellison. “These AI technologies are key foundational elements of the Oracle Cloud’s Autonomous Infrastructure and Intelligent Applications. Vishal’s expertise and experience make him ideally suited to provide strategic vision and expert advice to our company and to our customers. He is a most welcome addition to the Oracle Board.”

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com/investor or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor Statement

Statements in this press release relating to Oracle’s future plans, expectations, beliefs, intentions and prospects are “forward-looking statements” and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. A detailed discussion of these factors and other risks that affect our business is contained in our U.S. Securities and Exchange Commission (“SEC”) filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading “Risk Factors.” Copies of these filings are available online from the SEC, by contacting Oracle Corporation’s Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of December 9, 2019. Oracle undertakes no duty to update any statement in light of new information or future events.

Oracle Health Sciences Participates in TOP Tech Sprint

Oracle Press Releases - Mon, 2019-12-09 07:00
Press Release
Oracle Health Sciences Participates in TOP Tech Sprint
Could enable the use of open data and AI to match cancer patients with clinical trials and experimental therapies

Redwood Shores, Calif.—Dec 9, 2019

Clinical trials are an essential gateway for getting new cures to market. However, many patients struggle to find the right trials that meet their unique medical requirements. To explore better ways to match patients with the right trials, Oracle Health Sciences is once again participating in The Opportunity Project (TOP) Technology Sprint: Creating the Future of Health.

This year’s entry joins Oracle technology with de-identified precision oncology open data sets from the United States Department of Veterans Affairs and the National Cancer Institute. The demo will highlight how Artificial Intelligence (AI) and customer experience solutions could be used to connect cancer patients with available clinical trials and experimental therapies.

“It is paramount that we collaborate with our peers within the federal government and technology communities to collectively evaluate what innovative opportunities exist and to explore the potential applications AI and machine learning can offer to fight deadly diseases such as cancer,” said Steve Rosenberg, senior vice president and general manager, Oracle Health Sciences. “The opportunity to participate in the TOP challenge lets us apply Oracle solutions in new ways while also harnessing the learnings to benefit the lives of patients who need treatment in the future.”

Connecting Patients with Critical Trials

This year Oracle’s entry builds on the last technology sprint by leveraging open datasets to explore more deeply the applications of machine learning (ML) and AI. In addition, it demonstrates how features for prospective trial recruitment will work with appropriate identity protection.

Oracle’s submission uses a combination of Oracle Healthcare Foundation, Oracle CX Service, Oracle Policy Automation, Oracle Digital Assistant and Oracle Labs PGX: Parallel Graph AnalytiX solutions to create a demonstration that in the future might enable connecting patients and clinical staff through intuitive interfaces that provide data at the point of care. A graphical interface would allow physicians to track a patient’s care journey and would indicate which clinical trial options are available. It applies AI to standardize data from clinical trial requirement forms to specify eligibility criteria. The result can be a more simplified and personalized experience to help determine the best treatment for patients. Patients can also keep their identifying information from being shared, while allowing only their de-identified clinical data to be made available so they can receive information about new programs, clinical studies or therapies that may be of value to their care.

TOP is a 12-week technology development sprint that brings together technology developers, communities, and government to solve real-world problems using open data. TOP will host its Demo Day 2019 on December 10, 2019 at the U.S. Census Bureau in Suitland, MD.

Contact Info
Judi Palmer
Oracle
+1 650.784.4119
judi.palmer@oracle.com
Rick Cohen
Blanc & Otus
+1 212.885.0563
rick.cohen@blancandotus.com
About Oracle Health Sciences

Oracle Health Sciences breaks down barriers and opens new pathways to unify people and processes to bring new drugs to market faster. As a leader in Life Sciences technology, Oracle Health Sciences is trusted by 30 of the top 30 pharma, 10 of the top 10 biotech and 10 of the top 10 CROs for managing clinical trials and pharmacovigilance around the globe.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1 650.784.4119

Rick Cohen

  • +1 212.885.0563

Teri Meri Prem Kahani Cover | Keyboard Performance | by Dharun at Improviser Music Studio

Senthil Rajendran - Mon, 2019-12-09 03:13

My Son Dharun Performing at Improviser Music Studio

Teri Meri Prem Kahani Cover

DarkSide Cover | Keyboard Performance | by Dharun at Improviser Music Studio

Senthil Rajendran - Mon, 2019-12-09 03:11
My Son Dharun Performing at Improviser Music Studio

DarkSide Cover

Documentum – LDAP Config Object “certdb_location” has invalid value

Yann Neuhaus - Sun, 2019-12-08 02:00

In a previous blog, I talked about automating the creation of LDAP/LDAPs Server Config Objects. However, the first time I applied these steps, I actually faced an issue and, at first, I couldn’t really get my head around it. This will be a rather short post but I still wanted to share my thoughts because it might save you some headache. The issue is only linked to the SSL part of the setup, so there is no problem with basic non-secure LDAP communications.

So, after applying all the steps, everything went fine and I therefore ran the dm_LDAPSynchronization job to validate the setup. Unfortunately, the generated log file wasn’t so great:

[dmadmin@content-server-0 ~]$ cat $DOCUMENTUM/dba/log/repo01/sysadmin/LDAPSynchronizationDoc.txt
LDAPSynchronization Report For DocBase repo01 As Of 2019/09/22 12:45:57

2019-09-22 12:45:56:124 UTC [default task-79]: LDAP Synchronization Started @ Sun Sep 22 12:45:56 UTC 2019
2019-09-22 12:45:56:124 UTC [default task-79]:
2019-09-22 12:45:56:124 UTC [default task-79]: $JMS_HOME/server/DctmServer_MethodServer/deployments/ServerApps.ear/lib/dmldap.jar
2019-09-22 12:45:56:125 UTC [default task-79]: ---------------------------------------------------------------------------------
2019-09-22 12:45:56:125 UTC [default task-79]: Product-Name : Content Server-LDAPSync
2019-09-22 12:45:56:125 UTC [default task-79]: Product-Version : 16.4.0110.0167
2019-09-22 12:45:56:125 UTC [default task-79]: Implementation-Version : 16.4.0110.0167
2019-09-22 12:45:56:125 UTC [default task-79]: ---------------------------------------------------------------------------------
2019-09-22 12:45:56:125 UTC [default task-79]:
2019-09-22 12:45:56:126 UTC [default task-79]: Preparing LDAP Synchronization...
2019-09-22 12:45:57:101 UTC [default task-79]: INFO: Job Status: [LDAP Synchronization Started @ Sun Sep 22 12:45:56 UTC 2019]
2019-09-22 12:45:57:120 UTC [default task-79]: INFO: Job Status updated
2019-09-22 12:45:58:415 UTC [default task-79]: INFO: List of Ldap Configs chosen for Synchronization
2019-09-22 12:45:58:415 UTC [default task-79]: INFO:    >>>0812d6878000252c - Internal_LDAP<<<
2019-09-22 12:45:58:415 UTC [default task-79]: INFO:
2019-09-22 12:45:58:418 UTC [default task-79]:
2019-09-22 12:45:58:418 UTC [default task-79]: ==================================================================================
2019-09-22 12:45:58:420 UTC [default task-79]: Starting Sychronization for ldap config object >>>Internal_LDAP<<< ...
2019-09-22 12:45:58:425 UTC [default task-79]: Unexpected Error. Caused by: [DM_LDAP_SYNC_E_EXCEPTION_ERROR]error:  "Ldap Config Property "certdb_location" has invalid value "ldap_chain"."
2019-09-22 12:45:58:426 UTC [default task-79]: ERROR: DmLdapException:: THREAD: default task-79; MSG: [DM_LDAP_SYNC_E_EXCEPTION_ERROR]error:  "Ldap Config Property "certdb_location" has invalid value "ldap_chain"."; ERRORCODE: 100; NEXT: null
        at com.documentum.ldap.internal.sync.SynchronizationContextBuilder.getCertDbLocation(SynchronizationContextBuilder.java:859)
        at com.documentum.ldap.internal.sync.SynchronizationContextBuilder.setCertDbLocation(SynchronizationContextBuilder.java:225)
        at com.documentum.ldap.internal.sync.SynchronizationContextBuilder.buildSynchronizationContext(SynchronizationContextBuilder.java:49)
        at com.documentum.ldap.LDAPSync.prepareSync(LDAPSync.java:438)
        at com.documentum.ldap.LDAPSync.processJob(LDAPSync.java:238)
        at com.documentum.ldap.LDAPSync.execute(LDAPSync.java:80)
        at com.documentum.mthdservlet.DfMethodRunner.runIt(Unknown Source)
        at com.documentum.mthdservlet.AMethodRunner.runAndReturnStatus(Unknown Source)
        at com.documentum.mthdservlet.DoMethod.invokeMethod(Unknown Source)
        at com.documentum.mthdservlet.DoMethod.doPost(Unknown Source)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:86)
        at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
        at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
        at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
        at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
        at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
        at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58)
        at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:72)
        at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
        at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:282)
        at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:261)
        at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:80)
        at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:172)
        at io.undertow.server.Connectors.executeRootHandler(Connectors.java:199)
        at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:774)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

2019-09-22 12:45:58:426 UTC [default task-79]: WARNING:   **** Skipping Ldap Config Object - Internal_LDAP ****
2019-09-22 12:45:58:775 UTC [default task-79]: Synchronization of ldap config object >>>Internal_LDAP<<< is finished
2019-09-22 12:45:58:775 UTC [default task-79]: ==================================================================================
2019-09-22 12:45:58:775 UTC [default task-79]:
2019-09-22 12:45:58:786 UTC [default task-79]: INFO: Job Status: [dm_LDAPSynchronization Tool had ERRORS at 2019/09/22 12:45:58. Total duration was 2 seconds.View the job's report for details.]
2019-09-22 12:45:58:800 UTC [default task-79]: INFO: Job Status updated
2019-09-22 12:45:58:800 UTC [default task-79]: LDAP Synchronization Ended @ Sun Sep 22 12:45:58 UTC 2019
2019-09-22 12:45:58:800 UTC [default task-79]: Session s2 released successfully
Report End  2019/09/22 12:45:58
[dmadmin@content-server-0 ~]$

 

After a bunch of checks inside the repository, everything seemed fine. All the objects had the correct content, the correct references, and so on. However, there was one thing that wasn’t exactly as per the KB6321243: the extension of the Trust Chain file. If you look at the basics of SSL Certificate encodings, there are two main possibilities: DER (binary, not readable) or PEM (ASCII, readable). In addition to that, you can also have files with CRT or CER extensions, but these are always either DER or PEM encoded. The KB asks you to have a PEM encoded SSL Certificate, so this file can technically have a “.pem”, “.cer” or “.crt” extension; they are almost synonymous. Therefore, here I was, thinking that I could keep my “.crt” extension for the PEM encoded SSL Trust Chain.
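Since the two encodings come up a lot, here is a small helper of my own (a sketch, assuming a POSIX shell) to tell them apart: a PEM file is plain ASCII and contains the BEGIN CERTIFICATE banner, while a DER file is binary:

```shell
# cert_encoding FILE -- prints "PEM" or "DER" depending on whether the file
# contains the ASCII certificate banner (sketch, not an official check)
cert_encoding() {
  if grep -q -- "-----BEGIN CERTIFICATE-----" "$1" 2>/dev/null; then
    echo "PEM"
  else
    echo "DER"
  fi
}
```

With openssl available, `openssl x509 -in file -noout -subject` (adding `-inform DER` for binary files) gives the same answer with more detail.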

To validate that this was the issue, I switched my file to the “.pem” extension and updated the “dm_location” Object:

[dmadmin@content-server-0 ~]$ cd $DOCUMENTUM/dba/secure/ldapdb
[dmadmin@content-server-0 ldapdb]$ mv ldap_chain.crt ldap_chain.pem
[dmadmin@content-server-0 ldapdb]$ 
[dmadmin@content-server-0 ldapdb]$ iapi repo01 -U${USER} -Pxxx


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058

Connecting to Server using docbase repo01
[DM_SESSION_I_SESSION_START]info:  "Session 0112d68780001402 started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> retrieve,c,dm_location where object_name='ldap_chain'
...
3a12d68780002522
API> get,c,l,file_system_path
...
$DOCUMENTUM/dba/secure/ldapdb/ldap_chain.cer
API> set,c,l,file_system_path
SET> $DOCUMENTUM/dba/secure/ldapdb/ldap_chain.pem
...
OK
API> get,c,l,file_system_path
...
$DOCUMENTUM/dba/secure/ldapdb/ldap_chain.pem
API> save,c,l
...
OK
API> ?,c,UPDATE dm_job OBJECTS set run_now=true, set a_next_invocation=DATE(NOW) WHERE object_name='dm_LDAPSynchronization'
objects_updated
---------------
              1
(1 row affected)
[DM_QUERY_I_NUM_UPDATE]info:  "1 objects were affected by your UPDATE statement."

API> exit
Bye
[dmadmin@content-server-0 ldapdb]$

 

With the above, I just changed the extension of the file on the file system and its reference in the “dm_location” Object. The last iAPI command triggered the dm_LDAPSynchronization job. Checking the new log file confirmed that the issue was solved: even though the Trust Chain was already a PEM encoded certificate, that wasn’t enough. There is actually a hardcoded check inside Documentum that forces you to use the “.pem” extension and nothing else:

[dmadmin@content-server-0 ldapdb]$ cat $DOCUMENTUM/dba/log/repo01/sysadmin/LDAPSynchronizationDoc.txt
LDAPSynchronization Report For DocBase repo01 As Of 2019/09/22 13:19:33

2019-09-22 13:19:30:360 UTC [default task-87]: LDAP Synchronization Started @ Sun Sep 22 13:19:30 UTC 2019
2019-09-22 13:19:30:360 UTC [default task-87]:
2019-09-22 13:19:30:361 UTC [default task-87]: $JMS_HOME/server/DctmServer_MethodServer/deployments/ServerApps.ear/lib/dmldap.jar
2019-09-22 13:19:30:367 UTC [default task-87]: ---------------------------------------------------------------------------------
2019-09-22 13:19:30:367 UTC [default task-87]: Product-Name : Content Server-LDAPSync
2019-09-22 13:19:30:367 UTC [default task-87]: Product-Version : 16.4.0110.0167
2019-09-22 13:19:30:367 UTC [default task-87]: Implementation-Version : 16.4.0110.0167
2019-09-22 13:19:30:367 UTC [default task-87]: ---------------------------------------------------------------------------------
2019-09-22 13:19:30:367 UTC [default task-87]:
2019-09-22 13:19:30:370 UTC [default task-87]: Preparing LDAP Synchronization...
2019-09-22 13:19:32:425 UTC [default task-87]: INFO: Job Status: [LDAP Synchronization Started @ Sun Sep 22 13:19:30 UTC 2019]
2019-09-22 13:19:32:453 UTC [default task-87]: INFO: Job Status updated
2019-09-22 13:19:34:292 UTC [default task-87]: INFO: List of Ldap Configs chosen for Synchronization
2019-09-22 13:19:34:292 UTC [default task-87]: INFO:    >>>0812d6878000252c - Internal_LDAP<<<
2019-09-22 13:19:34:292 UTC [default task-87]: INFO:
2019-09-22 13:19:34:294 UTC [default task-87]:
2019-09-22 13:19:34:294 UTC [default task-87]: ==================================================================================
2019-09-22 13:19:34:297 UTC [default task-87]: Starting Sychronization for ldap config object >>>Internal_LDAP<<< ...
2019-09-22 13:19:35:512 UTC [default task-87]: INFO: Directory Type: Sun ONE Directory Server ...
2019-09-22 13:19:35:517 UTC [default task-87]: INFO: Ldap Connection: SSL connection
2019-09-22 13:19:35:517 UTC [default task-87]: INFO: ldap://ldap.domain.com:636
2019-09-22 13:19:35:517 UTC [default task-87]: INFO: {java.naming.provider.url=ldap://ldap.domain.com:636, java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory}
2019-09-22 13:19:35:597 UTC [Thread-91752]: INFO: DM_LDAP_IGNORE_HOSTNAME_CHECK environment variable is enabled.
2019-09-22 13:19:35:598 UTC [Thread-91752]: INFO: Skipping hostname check
2019-09-22 13:19:35:598 UTC [Thread-91752]: INFO: DctmTrustMangaer.checkServerTrusted(): Successfully validated the certificate chain sent from server.
2019-09-22 13:19:35:635 UTC [default task-87]: INFO: DM_LDAP_IGNORE_HOSTNAME_CHECK environment variable is enabled.
2019-09-22 13:19:35:635 UTC [default task-87]: INFO: Skipping hostname check
2019-09-22 13:19:35:635 UTC [default task-87]: INFO: DctmTrustMangaer.checkServerTrusted(): Successfully validated the certificate chain sent from server.
2019-09-22 13:19:35:663 UTC [default task-87]: INFO: LDAP Search Retry: is_child_context = true
2019-09-22 13:19:35:663 UTC [default task-87]: INFO: LDAP Search Retry: Retry count = 1
2019-09-22 13:19:35:665 UTC [default task-87]: Starting the group synchronization...
...
2019-09-22 13:19:35:683 UTC [default task-87]: Group synchronization finished.
2019-09-22 13:19:35:683 UTC [default task-87]:
2019-09-22 13:19:35:683 UTC [default task-87]: INFO: Updating Last Run Time: [20190922131935Z]
2019-09-22 13:19:35:683 UTC [default task-87]: INFO: Updating Last Change No: [null]
2019-09-22 13:19:35:749 UTC [default task-87]: INFO: Ldap Config Object >>>>Internal_LDAP<<<< updated
2019-09-22 13:19:35:751 UTC [default task-87]: Disconnected from LDAP Server successfully.
2019-09-22 13:19:36:250 UTC [default task-87]: Synchronization of ldap config object >>>Internal_LDAP<<< is finished
2019-09-22 13:19:36:250 UTC [default task-87]: ==================================================================================
2019-09-22 13:19:36:250 UTC [default task-87]:
2019-09-22 13:19:36:265 UTC [default task-87]: INFO: Job Status: [dm_LDAPSynchronization Tool Completed with WARNINGS at 2019/09/22 13:19:36. Total duration was 6 seconds.]
2019-09-22 13:19:36:278 UTC [default task-87]: INFO: Job Status updated
2019-09-22 13:19:36:278 UTC [default task-87]: LDAP Synchronization Ended @ Sun Sep 22 13:19:36 UTC 2019
2019-09-22 13:19:36:278 UTC [default task-87]: Session s2 released successfully
Report End  2019/09/22 13:19:36
[dmadmin@content-server-0 ldapdb]$

 

A pretty annoying design but there is nothing you can do about it. Fortunately, it’s not hard to fix the issue once you know what the problem is!
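To avoid hitting this again, a small preventive check over the ldapdb directory can help (a sketch of my own, not a Documentum tool):

```shell
# check_pem_extensions DIR -- flag any trust-chain file that does not use
# the ".pem" extension Documentum insists on (sketch only)
check_pem_extensions() {
  for f in "$1"/*; do
    [ -e "$f" ] || continue
    case "$f" in
      *.pem) echo "OK:   $f" ;;
      *)     echo "WARN: $f does not end in .pem" ;;
    esac
  done
}

# Hypothetical usage:
#   check_pem_extensions $DOCUMENTUM/dba/secure/ldapdb
```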

 

Cet article Documentum – LDAP Config Object “certdb_location” has invalid value est apparu en premier sur Blog dbi services.

Documentum – Automatic/Silent creation of LDAP/LDAPs Server Config Objects

Yann Neuhaus - Sun, 2019-12-08 02:00

If you have been working with Documentum, then you have probably already created/configured an LDAP/LDAPs Server Config Object (or several) so that your users can be globally managed in your organization. There are several compatible LDAP Servers, so I will just take one as an example (Sun One/Netscape/iPlanet Directory Server). To create this LDAP/LDAPs Server Config Object, you probably used Documentum Administrator because it’s simple and quick to set up; however, that’s not enough for automation. In this blog, I will show and explain the steps needed to configure the same thing without any need for DA.

The problem with DA is that it usually does some magic and you cannot always do exactly the same without it. Here, this also applies but to a smaller extent since it is only the SSL part (LDAPs) that needs specific steps. For this, there is a KB created by EMC some years ago (migrated to OpenText): KB6321243.

Before starting, let’s setup some parameters that will be used in this blog:

[dmadmin@content-server-0 ~]$ repo="repo01"
[dmadmin@content-server-0 ~]$ dm_location_name="ldap_chain"
[dmadmin@content-server-0 ~]$ file_path="$DOCUMENTUM/dba/secure/ldapdb/${dm_location_name}.pem"
[dmadmin@content-server-0 ~]$ ldap_server_name="Internal_LDAP"
[dmadmin@content-server-0 ~]$ ldap_host="ldap.domain.com"
[dmadmin@content-server-0 ~]$ ldap_ssl=1 #0 for LDAP, 1 for LDAPs
[dmadmin@content-server-0 ~]$ ldap_port=636
[dmadmin@content-server-0 ~]$ location=`if ((${ldap_ssl} == 1)); then echo ${dm_location_name}; else echo "ldapcertdb_loc"; fi`
[dmadmin@content-server-0 ~]$ ldap_principal="ou=APP,ou=applications,ou=intranet,dc=dbi services,dc=com"
[dmadmin@content-server-0 ~]$ ldap_pwd="T3stP4ssw0rd"
[dmadmin@content-server-0 ~]$ ldap_user_filter="objectclass=person"
[dmadmin@content-server-0 ~]$ ldap_user_class="person"
[dmadmin@content-server-0 ~]$ ldap_group_filter="objectclass=groupofuniquenames"
[dmadmin@content-server-0 ~]$ ldap_group_class="groupofuniquenames"

 

1. Preparation steps for LDAPs

The steps in this section are only needed if you want to configure SSL communications between your LDAP Server and Documentum; they can be done upfront without any issue. So let’s start with setting up the environment. Without DA, the only way to import/trust an SSL Certificate for the LDAPs connection is to add an environment variable named “DM_LDAP_CERT_FILE” and set it to “1”. This will make Documentum use certificate files for the trust chain instead of doing what DA is doing (the magic part), which we cannot replicate.

It is a little bit out of scope for this blog, but a second variable is often needed: “DM_LDAP_IGNORE_HOSTNAME_CHECK”, which drives the validation of the hostname. Setting it to “1” disables the hostname check and therefore allows you to use an LDAP Server that is behind a Proxy or a Load Balancer. This would also be needed with an LDAP (non-secure) setup.

[dmadmin@content-server-0 ~]$ grep DM_LDAP ~/.bash_profile
[dmadmin@content-server-0 ~]$ echo $DM_LDAP_CERT_FILE -- $DM_LDAP_IGNORE_HOSTNAME_CHECK
--
[dmadmin@content-server-0 ~]$
[dmadmin@content-server-0 ~]$ echo "export DM_LDAP_CERT_FILE=1" >> ~/.bash_profile
[dmadmin@content-server-0 ~]$ echo "export DM_LDAP_IGNORE_HOSTNAME_CHECK=1" >> ~/.bash_profile
[dmadmin@content-server-0 ~]$
[dmadmin@content-server-0 ~]$ grep DM_LDAP ~/.bash_profile
export DM_LDAP_CERT_FILE=1
export DM_LDAP_IGNORE_HOSTNAME_CHECK=1
[dmadmin@content-server-0 ~]$
[dmadmin@content-server-0 ~]$ source ~/.bash_profile
[dmadmin@content-server-0 ~]$ echo $DM_LDAP_CERT_FILE -- $DM_LDAP_IGNORE_HOSTNAME_CHECK
1 -- 1
[dmadmin@content-server-0 ~]$

 

For the variables to take effect, you will need to restart the Repositories. I usually set everything up (the LDAPs specific pieces + the LDAP steps) and only then restart the Repositories, so it’s done once at the very end of the setup.

The next step is to create/prepare the Trust Chain. In DA, you can import the Trust Chain one certificate at a time, the Root first and then the Intermediate one. While using “DM_LDAP_CERT_FILE=1” (so without DA), you can unfortunately use only one file per LDAP, and therefore this file needs to contain the full Trust Chain. To do that, simply put the content of both the Root and Intermediate SSL Certificates in a single file, one after the other. In the end, your file should contain something like this:

[dmadmin@content-server-0 ~]$ vi ${dm_location_name}.pem
[dmadmin@content-server-0 ~]$ cat ${dm_location_name}.pem
-----BEGIN CERTIFICATE-----
<<<content_of_root_ca>>>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<<<content_of_intermediate_ca>>>
-----END CERTIFICATE-----
[dmadmin@content-server-0 ~]$
[dmadmin@content-server-0 ~]$ mv ${dm_location_name}.pem ${file_path}
[dmadmin@content-server-0 ~]$

 

Once you have the file, you can put it wherever you want, with whatever name you want, but it absolutely needs a “.pem” extension. You can check this blog, which explains what happens if that isn’t the case and how to fix it. As you can see above, I chose to put the file where DA puts them as well.
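If you don’t have the Root and Intermediate certificates at hand, one way to collect what the server presents is sketched below (the helper name is mine and host/port are placeholders; also note that servers usually do not send the Root CA itself, so you may still need to append it manually):

```shell
# extract_pem_blocks -- keep only the PEM certificate blocks from stdin,
# dropping the surrounding openssl s_client chatter (sketch only)
extract_pem_blocks() {
  awk '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/'
}

# Hypothetical usage, dumping the chain presented by the LDAP server:
#   echo | openssl s_client -connect ldap.domain.com:636 -showcerts 2>/dev/null \
#     | extract_pem_blocks > $DOCUMENTUM/dba/secure/ldapdb/ldap_chain.pem
```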

The last step for this SSL specific part is to create a “dm_location” Object that references the file just created, so that the LDAP Server Config Object can use it to trust the target LDAP Server. Contrary to the LDAP Certificate Database Management in DA, which is global to all Repositories (so it needs to be done only once), here you will need to create the “dm_location” Object in every Repository that is going to use the LDAP Server. This can be done very easily via iAPI:

[dmadmin@content-server-0 ~]$ iapi ${repo} -U${USER} -Pxxx << EOF
create,c,dm_location
set,c,l,object_name
${dm_location_name}
set,c,l,path_type
file
set,c,l,file_system_path
${file_path}
save,c,l
exit
EOF


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058

Connecting to Server using docbase repo01
[DM_SESSION_I_SESSION_START]info:  "Session 0112d6878000111b started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> ...
3a12d68780002522
API> SET> ...
OK
API> SET> ...
OK
API> SET> ...
OK
API> ...
OK
API> Bye
[dmadmin@content-server-0 ~]$

 

The name of the “dm_location” Object doesn’t have to match the name of the Trust Chain file; I’m just using the same name here so the relation between the two is easier to see. These are the only steps that are specific to SSL communications between your LDAP Server and Documentum.

 

2. Global steps for LDAP

This section applies to all cases: whether you are setting up an LDAP or LDAPs Server, you will need to create the “dm_ldap_config” Object and everything else described below. As mentioned previously, I’m using one type of LDAP Server for this example (the value of “dm_ldap_config.a_application_type”). If you aren’t very familiar with the settings inside the Repository, the simplest way to find out which parameters you need (and their values) is to create one LDAP Config Object using DA. Once done, just dump it and you can re-use that same configuration in the future.

So let’s start with creating a sample LDAP Server Config Object:

[dmadmin@content-server-0 ~]$ iapi ${repo} -U${USER} -Pxxx << EOF
create,c,dm_ldap_config
set,c,l,object_name
${ldap_server_name}
set,c,l,map_attr[0]
user_name
set,c,l,map_attr[1]
user_login_name
set,c,l,map_attr[2]
user_address
set,c,l,map_attr[3]
group_name
set,c,l,map_attr[4]
client_capability
set,c,l,map_attr[5]
user_xprivileges
set,c,l,map_attr[6]
default_folder
set,c,l,map_attr[7]
workflow_disabled
set,c,l,map_val[0]
uniqueDisplayName
set,c,l,map_val[1]
uid
set,c,l,map_val[2]
mail
set,c,l,map_val[3]
cn
set,c,l,map_val[4]
2
set,c,l,map_val[5]
32
set,c,l,map_val[6]
/Home/${uniqueDisplayName}
set,c,l,map_val[7]
false
set,c,l,map_attr_type[0]
dm_user
set,c,l,map_attr_type[1]
dm_user
set,c,l,map_attr_type[2]
dm_user
set,c,l,map_attr_type[3]
dm_group
set,c,l,map_attr_type[4]
dm_user
set,c,l,map_attr_type[5]
dm_user
set,c,l,map_attr_type[6]
dm_user
set,c,l,map_attr_type[7]
dm_user
set,c,l,map_val_type[0]
A
set,c,l,map_val_type[1]
A
set,c,l,map_val_type[2]
A
set,c,l,map_val_type[3]
A
set,c,l,map_val_type[4]
V
set,c,l,map_val_type[5]
V
set,c,l,map_val_type[6]
E
set,c,l,map_val_type[7]
V
set,c,l,ldap_host
${ldap_host}
set,c,l,port_number
${ldap_port}
set,c,l,person_obj_class
${ldap_user_class}
set,c,l,group_obj_class
${ldap_group_class}
set,c,l,per_search_base
${ldap_principal}
set,c,l,grp_search_base
${ldap_principal}
set,c,l,per_search_filter
${ldap_user_filter}
set,c,l,grp_search_filter
${ldap_group_filter}
set,c,l,bind_dn
${ldap_principal}
set,c,l,user_subtype
dm_user
set,c,l,deactivate_user_option
T
set,c,l,import_mode
groups
set,c,l,bind_type
bind_by_dn
set,c,l,ssl_mode
${ldap_ssl}
set,c,l,ssl_port
${ldap_port}
set,c,l,certdb_location
${location}
set,c,l,map_rejection[0]
2
set,c,l,map_rejection[1]
2
set,c,l,map_rejection[2]
0
set,c,l,map_rejection[3]
2
set,c,l,map_rejection[4]
0
set,c,l,map_rejection[5]
0
set,c,l,map_rejection[6]
2
set,c,l,map_rejection[7]
0
set,c,l,retry_count
3
set,c,l,retry_interval
3
set,c,l,failover_use_interval
300
set,c,l,r_is_public
F
set,c,l,a_application_type
netscape
set,c,l,a_full_text
T
save,c,l
exit
EOF


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058

Connecting to Server using docbase repo01
[DM_SESSION_I_SESSION_START]info:  "Session 0112d68780001123 started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> ...
0812d6878000252c
API> SET> ...
OK
...
...
...
[dmadmin@content-server-0 ~]$

 

Once the LDAP Server Config Object has been created, you can register it in the “dm_server_config” Objects. In our silent scripts, we use the r_object_id of the object just created so that we are sure it is the correct value, but below, for simplicity, I’m using a select to retrieve the r_object_id based on the LDAP Object Name (so make sure it’s unique if you use the following):

[dmadmin@content-server-0 ~]$ iapi ${repo} -U${USER} -Pxxx << EOF
?,c,update dm_server_config object set ldap_config_id=(select r_object_id from dm_ldap_config where object_name='${ldap_server_name}')
exit
EOF


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058

Connecting to Server using docbase repo01
[DM_SESSION_I_SESSION_START]info:  "Session 0112d6878000112f started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> objects_updated
---------------
              1
(1 row affected)
[DM_QUERY_I_NUM_UPDATE]info:  "1 objects were affected by your UPDATE statement."

API> Bye
[dmadmin@content-server-0 ~]$

 

Then, it is time to encrypt the password of the LDAP Account that is used for the “bind_dn” (${ldap_principal} above):

[dmadmin@content-server-0 ~]$ crypto_docbase=`grep ^dfc.crypto $DOCUMENTUM_SHARED/config/dfc.properties | tail -1 | sed 's,.*=[[:space:]]*,,'`
[dmadmin@content-server-0 ~]$ 
[dmadmin@content-server-0 ~]$ iapi ${crypto_docbase} -U${USER} -Pxxx << EOF
encrypttext,c,${ldap_pwd}
exit
EOF


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058

Connecting to Server using docbase gr_repo
[DM_SESSION_I_SESSION_START]info:  "Session 0112d68880001135 started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> ...
DM_ENCR_TEXT_V2=AAAAEHQfx8vF52wIC1Lg8KoxAflW/I7ZnbHwEDJCciKx/thqFZxAvIFNtpsBl6JSGmI4XKYCCuUl/NMY7BTsCa2GeIdUebL2LYfA/nJivzuikqOt::gr_repo
API> Bye
[dmadmin@content-server-0 ~]$

 

Finally, the only thing left is to create the file “$DOCUMENTUM/dba/config/${repo}/ldap_${dm_ldap_config_id}.cnt” and put the encrypted password in it (the whole line “DM_ENCR_TEXT_V2=…::gr_repo”). As mentioned previously, after a small restart of the Repository, you should then be able to run the dm_LDAPSynchronization job. You might want to configure the job with some specific properties, but that’s up to you.
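Scripted, this last step can look like the sketch below (the helper name is mine; the r_object_id and the encrypted line are placeholders standing in for the values captured from the earlier iAPI outputs):

```shell
# write_ldap_cnt REPO LDAP_CONFIG_ID ENCRYPTED_LINE -- create the .cnt file
# the Content Server reads the LDAP bind password from (sketch only)
write_ldap_cnt() {
  cnt="$DOCUMENTUM/dba/config/$1/ldap_$2.cnt"
  printf '%s\n' "$3" > "$cnt"
  # tightening the permissions is my own habit, not a documented requirement
  chmod 600 "$cnt"
}

# Hypothetical usage with placeholder values:
#   write_ldap_cnt "${repo}" "0812d6878000252c" "DM_ENCR_TEXT_V2=...::gr_repo"
```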

With all the commands above, you already have a very good basis to automate the creation/setup of your LDAP/LDAPs Server without issue. In our automation, instead of printing the result of the iAPI commands to the console, we usually send it to a log file. That way, we can automatically check the outcome of each command and continue the execution accordingly, so no human interaction is needed. For this blog, displaying everything directly was simply more reader friendly.
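As an illustration of that log-based approach (the function and its behavior are my own convention, not a Documentum tool), the check after each iAPI call can be as simple as grepping the captured output for error-level message codes, which all follow the “[DM_..._E_...]” pattern visible in the logs above:

```shell
# check_iapi_log LOGFILE -- fail if the captured iAPI output contains any
# error-level Documentum message code such as [DM_LDAP_SYNC_E_...] (sketch)
check_iapi_log() {
  if grep -qE '\[DM_[A-Z0-9_]+_E_' "$1"; then
    echo "FAILED"
    return 1
  fi
  echo "OK"
}
```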

Maybe one final note: the above steps are for a Primary Content Server. If you are doing the same thing on a Remote Content Server (RCS/CFS), some steps aren’t needed. For example, you will still need to put the Trust Chain in the correct location, but you won’t need to create the “dm_location” or “dm_ldap_config” Objects, since they are inside the Repository and therefore already present.

 

Cet article Documentum – Automatic/Silent creation of LDAP/LDAPs Server Config Objects est apparu en premier sur Blog dbi services.

Installing Oracle 19c on Linux

Pete Finnigan - Sat, 2019-12-07 20:53
I needed to create a new 19c install yesterday for a test of some customer software and whilst I love Oracle products I have to say that installing the software and database has never been issue free and simple over....[Read More]

Posted by Pete On 06/12/19 At 04:27 PM

Categories: Security Blogs
